Chuck McManis

Hello Internet visitor, welcome to my home page.

If you are wondering, yes, I am the guy that comments as ChuckMcM on Hacker News. And I sometimes tweet as @ChuckMcManis. I have tried to stay away from the controversial issues, but I can’t always. Mostly I’m all about the nerdy stuff.

If you are wondering who I am, I am a father, a husband, an engineer, and a lifelong learner. I studied Electrical Engineering at USC during Dean Silverman’s tenure, and after graduating with a BSEE I moved to the valley of the Nerds (aka Silicon Valley, aka Santa Clara Valley) in the mid-1980s to work at a chip company that was changing the world: Intel.

If you want a boring breakdown of what has happened since then, you can look at my LinkedIn profile.

My passion, though, has always been systems, the more complex the better. Even in elementary school I loved to take apart complex mechanisms. I would carefully dissect alarm clocks and wind-up toys, observing how each part contributed to the whole.

Software Defined Radios

In 2015, I discovered that you could get a software defined radio for $25 and that there was a vibrant community around this cheap gizmo. I had heard about SDR from Matt Ettus at a conference in 2007, but at $2,000 for the cheapest version it was way more than I would spend to “experiment.”

For $25, though, I was hooked, but I quickly found this simple radio didn’t do what I wanted (no ability to transmit), and on the advice of a friend I got the HackRF One. As fate would have it, I got to meet its creator, Michael Ossmann. That led me to still more radios of different kinds, and now I’ve got over a dozen different SDRs with a variety of capabilities, strengths, and weaknesses.

Software based radios are an excellent systems puzzle. The balance between hardware, software, frequency, and complexity leads to many interesting solution spaces. That wireless was “taking off” was just a bonus. Building SDR systems occupied me from late 2017 to late 2021.

In between consulting and writing on management, technology, and standards, I have upgraded my own workshop to include things like vector signal generators and spectrum analyzers. These were tools I had no use for when building computer systems, but they became essential when building radios.

The other thing SDR inspired me to do was to get back into Amateur Radio, and with some work and a lot of studying I earned my Extra class license (AI6ZR). The combination of SDR and an Extra class license has enabled me to explore radio much more thoroughly than I was able to as a Novice back in the late ’70s.

Systems over the years

In the mid-’80s I joined Sun Microsystems. I worked in the Systems Group (which was responsible for SunOS) as a member of the Basic ONC group (BONC). It was at Sun that the complexity of distributed systems really sank in. I was the Open Network Computing architect, and I am still the assigned number authority for port 111 in the IP networking stack. At Sun I designed NIS+, a secure, extensible name service for enterprise networks; it was used all around the world in a huge number of systems. It would not have made it out of Sun, however, if I hadn’t been helped by some really great engineers who made it real.

When I helped start FreeGate, the company designed an “all in one” Internet appliance. Today you would think of it as an Internet gateway, although it was more complete than that, providing email, VPN, NAS, FTP, and DNS services. That system taught me the value of building the configuration management elements and the service elements at the same time, so they work together. Its worst feature, as far as VARs were concerned, was that once they installed it the customer no longer called them out for service calls. It just worked.

At Network Appliance (NetApp) I had the mission of creating a NAS system that could scale, unconstrained by the scaling limitations of microprocessors. The crown jewel of NetApp at the time was a real-time, always-consistent file system called WAFL that was tightly integrated with the Sun ONC protocol NFS and a home-grown storage management layer employing RAID techniques.

Scalable, distributed file systems are an amazing interconnection of state changes in time, interacting on unreliable hardware. It was a glorious problem to work on and through.

By the end of my tenure there, the problem had effectively been decomposed into what became known as a three layer cake, with naming (files are the original naming problem in CS), file system semantics, and storage reliability assurance (aka RAID) as the primary layers. A really talented engineer named John Wall implemented a split between the file system and RAID layers which demonstrated a 60% performance improvement with the same number of disk drives. This was an important piece of work because it demonstrated that the system was not constrained by I/O operations; rather, it was constrained by how rapidly it could resolve the operations needed to get to the next stable state (called a checkpoint in WAFL).
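
To make that layering concrete, here is a minimal Python sketch of the three layer cake under my own assumptions; none of these class or method names come from NetApp, and the real WAFL machinery is far more involved. The point it illustrates is the last one above: the file system layer resolves operations in memory and only periodically pushes a stable state (a checkpoint) down to the storage layer.

```python
# Illustrative only: a toy "three layer cake" with naming, file system
# semantics, and storage reliability as separate layers. All names here
# are hypothetical, not NetApp's.

class NamingLayer:
    """Maps human-visible paths to file identifiers (the naming problem)."""
    def __init__(self):
        self._ids = {}

    def lookup(self, path):
        return self._ids.setdefault(path, len(self._ids))

class StorageLayer:
    """Stands in for the RAID layer: stores blocks, knows nothing of files."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data

class FileSystemLayer:
    """Resolves operations in memory, then commits them as a checkpoint."""
    def __init__(self, storage):
        self.storage = storage
        self.pending = []  # operations resolved since the last stable state

    def update(self, file_id, data):
        self.pending.append((file_id, data))

    def checkpoint(self):
        # Throughput is bounded by how quickly we can reach this stable
        # state, not by the raw I/O operations underneath.
        for file_id, data in self.pending:
            self.storage.write(file_id, data)
        self.pending.clear()

names = NamingLayer()
fs = FileSystemLayer(StorageLayer())
fs.update(names.lookup("/home/chuck/notes.txt"), b"hello")
fs.checkpoint()  # advance to the next stable state
```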

At Google, and later at Blekko, the interesting systems challenge was how to compose, out of unreliable hardware, a reliable system that could achieve 100% uptime (or ‘no nines’).

I felt that the keys to 100% uptime are written in multicellular biology. An organ like your kidney keeps working even when it takes damage from an injury or disease. This, to me, is due in large part to systems that constantly look for, and then repair, damage to the cells.

I got to build a system like this at Google that would constantly look for bad disks, repair them immediately as it found them, and commit them for further repair if they were completely dead. It worked remarkably well for such a simple concept and provided immediate, and significant, operational cost benefits in the infrastructure.

When I came to Blekko we built further on those concepts (Greg Lindahl called the failures ‘micro fractures’) by monitoring as many system health parameters as we could think of and using them to inform and drive a set of processes that provided corrective action, ‘gardening’ the cluster. By the time IBM acquired Blekko, I felt this was a much more compelling technology for companies implementing infrastructure in the ‘cloud’ than the web crawler was. Sadly, I don’t think any of it made it into IBM’s cloud efforts.
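
The shape of that gardening loop is simple enough to sketch. The Python below is purely illustrative (every function is a hypothetical stand-in for real probes and repair actions like SMART monitoring, rebuilds, or reimaging): watch the health signals, attempt a repair, and escalate anything that cannot be fixed.

```python
# Illustrative only: a toy gardening pass over a cluster. All of the
# probes and repair actions here are hypothetical stand-ins.

import time

def check_health(node):
    """Stand-in for real probes: SMART data, ECC counts, latency, etc."""
    return node.get("healthy", True)

def repair(node):
    """Stand-in for corrective action: remount, rebuild, reimage..."""
    node["healthy"] = True
    return node["healthy"]

def garden(cluster, interval_s=60):
    """One gardening pass; in production this loop runs continuously."""
    for node in cluster:
        if not check_health(node) and not repair(node):
            node["state"] = "needs_replacement"  # escalate to a human
    time.sleep(interval_s)

cluster = [{"name": "disk-01"}, {"name": "disk-02", "healthy": False}]
garden(cluster, interval_s=0)
print(all(check_health(n) for n in cluster))  # -> True
```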

Anecdotes

I have a number of experiences that have shaped who I am and how I look at the world. I’ve collected a few here and will add to them as time permits.

Building NIS+

I had been tasked with replacing Sun’s naming service (known variously as “YP”, “Yellow Pages”, “Network Information Service”, and “NIS”), which was simple, massively insecure, and essential to managing groups of machines.

Naming is one of the ‘deep problems’ in computing; it comes up in a surprising number of ways. Network naming of things has similar complexity.

There were a bit more than three network naming paradigms around when I started designing what was to become NIS+. These were DNS (the Domain Name System), XNS (the Xerox Name Service, also known as Grapevine), and Yellow Pages (known as YP). There was also an effort underway at OSI to define the answer to all naming problems, referred to by its specification, X.500.

At the time (and to some extent today) people couldn’t decide whether a naming service was a database or a directed graph. I believe that confusion arose from the way people and applications interacted with the system. DNS was a system of essentially ‘named leaves’ in a singly rooted tree (also insecure, btw), and X.500 was a database of tables with a selection syntax that has more in common with today’s NoSQL queries than with name service requests.

NIS+ brought three innovations to market that hadn’t existed before. First, it provided a namespace for tables, so the password table for a given namespace might be passwd.foo.com.. Second, it introduced a trust model where key signing and exchange allowed portions of the global tree to trust each other without trusting the entire tree, so department1.foo.com. could trust servers in department2.foo.com. if both trusted foo.com. And finally, it added a multi-master transactional log style that used the notion of ‘network relative time’ to resolve multiple updates, rather than a globally synchronized clock.
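
For flavor, here is a tiny Python sketch of the first two ideas. It is illustrative only, none of it is NIS+ code, and the names and data structures are mine: tables are addressed as names in the domain tree, and two subtrees trust each other when both have exchanged keys with a common domain.

```python
# Illustrative only: toy versions of NIS+ table naming and scoped trust.
# The real service did this with signed keys, not a Python dict.

def table_name(table, domain):
    """A table is addressed like a host: passwd.foo.com."""
    return f"{table}.{domain}"

# Which domains have exchanged (and cross-signed) keys with each other.
exchanged = {
    "foo.com.": {"department1.foo.com.", "department2.foo.com."},
    "department1.foo.com.": {"foo.com."},
    "department2.foo.com.": {"foo.com."},
}

def trusts(a, b):
    """Direct key exchange, or a common domain both exchanged keys with."""
    direct = b in exchanged.get(a, set())
    common = exchanged.get(a, set()) & exchanged.get(b, set())
    return direct or bool(common)

print(table_name("passwd", "foo.com."))                        # passwd.foo.com.
print(trusts("department1.foo.com.", "department2.foo.com."))  # True
```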

An interest in engineering

Why does someone become an engineer? How do we encourage kids into STEM careers? Why are some groups much less likely to choose engineering as a career? These are all important questions that I think about, both as a father and as a hiring manager. I don’t have answers, but I do know why I became an engineer.

As a kid I experimented with a real chemistry set and explored the interaction of chemicals (both endothermically and exothermically!). When I got beyond the range of the simple workbooks and available texts on chemistry, I switched to optics, mostly microscopes, but also a 20x telescope that I used to look at the Moon and identify craters.

Like many young people of my generation, I was fascinated with space, marveling at the orbital space stations and lunar colonies we were going to have by the beginning of the 21st century. That interest in space and technology led me to explore science fiction and the very real science of computation at an early age (which was a lot harder for me to explore than it was for my kids!).

My father and I built models and Estes rockets. These activities helped me appreciate the engineering trade-offs that need to be made when converting a concept into reality. They also cemented my appreciation of physics, which could tell me that something would work by the numbers, even if nobody else believed it would.

Programming for Profit

While going to high school in Las Vegas I was able to convince our “computer science” teacher, Mrs. Christiansen, to teach me not only BASIC but FORTRAN and COBOL as well. That led to a very interesting summer job programming the Control Data mainframe at UNR via its remote batch station at UNLV. Then a friend of my father’s introduced me to the IBM Internship Program, and they hired me for a student internship while I was still in high school. There is some evidence they misunderstood where I was in my academic career :-) but after spending the summer working for them they hired me back again the next summer, so I must have done something right!

My summers with IBM gave me enough extra money to buy a personal computer kit. I looked long and hard at an IMSAI but ended up getting a kit from The Digital Group, a company in Denver, Colorado. I spent a couple of weeks soldering it together and had to send the CPU card back to Colorado to be brought up, but that system, with its 16-line by 64-character TV Typewriter screen, made me the youngest member of the Las Vegas Computer Club to actually own a computer. All told it had cost me nearly $1,000 to build, and it didn’t even have a case!

When I got to USC I was one of the few students who had their own computer, and because I was so fascinated by computation systems in general I took all of the computer science classes in addition to all of my electrical engineering classes. I also worked at the Image Processing Lab (IPL) and helped digitize some of those pictures you see in various technical papers on image processing. As an operator there I was one of the people who would mail out the ‘reference’ tapes containing the standard pictures the lab used.

Working at USC-IPL, which was closely associated with the Engineering Computer Laboratory (ECL) and, through it, with the Information Sciences Institute (ISI), put me right in the middle of the wide area networking research going on around the ARPANET.

When I graduated and came to Silicon Valley I was disappointed that my company (Intel) was not on the ARPANET; instead it had a node on an ad hoc network called “Usenet” that people mostly used to read “Network News.” Because I wanted to keep up with my INFO-MICRO email subscription and I could do basic system administration, Ken Shoemaker at Intel let me have an account on Intel’s Usenet node, intelca.

Intel was suffering through the chip recession in the mid-’80s, and Andy Grove remarked at a “Business Update Meeting” that he considered Intel the largest semiconductor company in the world because it was losing money less quickly than all the other companies. It was odd logic to me, but it was prophetic.

As things got worse at Intel, and as I found I enjoyed writing code more than I did designing hardware, I was recruited into a startup called Sun Microsystems. I started at Sun on the Monday after the Friday they went public. My manager-to-be had hinted strongly that he really wanted me to start the previous week, but I felt my obligation to Intel was such that I had to give them a full two weeks. That experience taught me the difference between pre-IPO stock options and post-IPO stock options.

From Sun onward my technical contributions to the companies I worked for were primarily software, with my hardware knowledge used in a more advisory capacity. It wasn’t until I helped to start a company called FreeGate that I was able to work with both the software engineers and the hardware engineers to build systems that balanced cost and performance between the two disciplines.

Entrepreneurial Exploits

My entrepreneurial experience comes from a variety of companies: Sun, which was a classic story of startup to dominant player (a “gorilla” in Geoff Moore’s lexicon); GolfWeb and FreeGate, which were more traditional dot.com endeavors; and Blekko, which was a “Web 2.0” company in the vein of a Google or Facebook (without the tremendous valuation).

I worked at Sun Microsystems from the day after their successful IPO to their first year with a $5B run rate (that took nearly 10 years). At Sun I got to experience a company transition from ‘small’ to ‘enterprise’ and all of the pain in between. My final act at Sun was as a member of the Java team focused on security and cryptography. Sun taught me a lot about how market forces often dominated technical decisions, and how company politics could blind a company to hazards in its path.

After Sun, I was an early employee at an Internet startup called “GolfWeb”, which today would be called a ‘blog site’ dedicated to golf. I was the technical lead behind their infrastructure and all of their site interactivity. I learned that you couldn’t differentiate with an Internet technology (Java) if your customers didn’t have it, and that, as systems, blog sites weren’t all that complicated. The tricky bit at the time was getting them online.

From GolfWeb I was part of the founding group of FreeGate, which was once again a systems company, creating an Internet appliance that made connecting to the Internet possible for small/medium businesses (SMBs). This company was solving the problem I had discovered at GolfWeb (getting SMBs connected to the Internet). After shipping two products, the company merged with Tut Systems in the late ’90s. Had the dot.com crash not occurred, the earn-out would have made me rich. This taught me a lot about counting chickens before they hatch.

And relatively recently I was the VP of Operations, and later of the combined Engineering and Operations, at Blekko, a search engine that stressed the quality of results over the quantity of results. While I had been at Google before Blekko and knew generally how search engines worked and made money, Blekko allowed me to dig deeply into the whole equation of operational costs, advertising expectations, and the very seamy world of affiliate marketing. Blekko was acquired by IBM for the crawler technology it had developed, and I swore off working at any company associated with the kind of advertising technology Blekko used and others still use.

Last Update December 2021