
The History of Computing

English, Technology, 1 season, 211 episodes, 2 days, 7 hours, 55 minutes
About
Computers touch almost every aspect of our lives today. We take for granted the way they work - and the unsung heroes who built the technology, protocols, philosophies, and circuit boards, patched them all together, and sometimes willed amazingness out of nothing. Not in this podcast. Welcome to the History of Computing. Let's get our nerd on!

Lotus: From Yoga to Software

Nelumbo nucifera, or the sacred lotus, is a plant that grows in flood plains, rivers, and deltas. Its seeds can remain dormant for years and, when floods come along, blossom into a colony of plants and flowers. Some of the oldest seeds can be found in China, where they're known to represent longevity. No surprise, given their nutrition and connection to the waters that irrigated crops by then. They also grow in far away lands, all the way to India and out to Australia. The flower is sacred in Hinduism and Buddhism, and further back in ancient Egypt.

Padmasana is a Sanskrit term combining Padma, or lotus, and Asana, or posture. The Pashupati seal from the Indus Valley civilization shows a deity in what's widely considered the first documented yoga pose, from around 2,500 BCE. 2,700 years later (give or take a century), the Hindu author and mystic Patanjali wrote a work referred to as the Yoga Sutras. Here he outlined the original asanas, or sitting yoga poses. The Rig Veda, from around 1,500 BCE, is the oldest currently known Vedic text. It is also the first to use the word "yoga". It describes songs, rituals, and mantras the Brahmans of the day used - as well as the Padma. Further Vedic texts explore how the lotus grew out of Lord Vishnu with Brahma in the center. He created the Universe out of lotus petals. Lakshmi went on to grow out of a lotus from Vishnu as well. It was only natural that humans would attempt to align their own meditation practices with the beautiful meditations of the lotus.

By the 300s, art and coins showed people in the lotus position. It was described in texts that survive from the 8th century. Over the centuries, contradictions in texts were clarified in a period known as Classical Yoga, then Tantra and Hatha Yoga were developed and codified in the Post-Classical Yoga age, and as empires grew and India became a part of the British Empire, yoga began to travel to the West in the late 1800s. By 1893, Swami Vivekananda gave lectures at the Parliament of Religions in Chicago. More practitioners meant more systems of yoga. Yogendra brought asanas to the United States in 1919, as more Indians migrated to the United States. Babaji's kriya yoga arrived in Boston in 1920. Then, as we've discussed in previous episodes, the United States tightened immigration in the 1920s and people had to go to India to get more training. Theos Bernard's Hatha Yoga: The Report of a Personal Experience brought some of that knowledge home when he came back in 1947. Indra Devi opened a yoga studio in Hollywood and wrote books for housewives. She brought a whole system, or branch, home. Walt and Magana Baptiste opened a studio in San Francisco. Swamis began to come to the US and more schools were opened. Richard Hittleman began teaching yoga in New York and took his lessons to television in 1961. He was one of the first to separate the religious aspects from the health benefits. By 1965, the immigration quotas were removed and a wave of teachers came to the US to teach yoga. The Beatles went to India in 1966 and 1968, and for many Transcendental Meditation took root; it has now grown to over a thousand training centers and over 40,000 teachers. Swamis opened meditation centers and institutes, and started magazines. Yoga became so big that Rupert Holmes even poked fun at it in his song "Escape (The Piña Colada Song)" in 1979. Yoga had become part of the counter-culture, and the generation that followed represented a backlash of sorts.
A common theme of the rise of personal computers is that the early pioneers were a part of that counter-culture. Mitch Kapor graduated high school in 1967, just in time to be one of the best examples of that. Kapor built his own calculator as a kid before going to camp to get his first exposure to programming on a Bendix. His high school got one of the IBM 1620 minicomputers and he got the bug. He went off to Yale at 16 and learned to program in APL, then found Computer Lib by Ted Nelson and learned BASIC. Then he discovered the Apple II.

Kapor did some programming for $5 per hour as a consultant, started the first east coast Apple User Group, and did some work around town. There are generations of people who did and do this kind of consulting, although now the rates are far higher. He met a grad student through the user group named Eric Rosenfeld who was working on his dissertation and needed some help programming, so Kapor wrote a little tool that took the idea of statistical analysis from the Time Shared Reactive Online Library, or TROLL, and ported it to the microcomputer, which he called Tiny Troll.

Then he enrolled in the MBA program at MIT. He got a chance to see VisiCalc and meet Bob Frankston and Dan Bricklin, who introduced him to the team at Personal Software. Personal Software was founded by Dan Fylstra and Peter Jennings when they published Microchess for the KIM-1 computer. That led to ports for the 1977 Trinity of the Commodore PET, Apple II, and TRS-80, and by then they had taken Bricklin and Frankston's VisiCalc to market. VisiCalc was the killer app for those early PCs and helped make the Apple II successful. Personal Software brought Kapor on, as well as Bill Coleman of BEA Systems and Electronic Arts cofounder Rich Melmon. Today, software developers get around 70 percent royalties to publish software on app stores, but at the time fees were closer to 8 percent, a model pulled from book royalties. Much of the rest went to production of the box and disks, the sales and marketing, and support.

Kapor was to write a product that could work with VisiCalc. By then Rosenfeld was off to the world of corporate finance, so Kapor moved to Silicon Valley, learned how to run a startup, moved back east in 1979, and released VisiPlot and VisiTrend in 1981. He made over half a million dollars in royalties in the first six months. By then, he had bought out Rosenfeld's shares in what he was doing and hired Jonathan Sachs, who had been at MIT earlier, where he wrote the STOIC programming language, and then went to work at Data General. Sachs worked on spreadsheet ideas at Data General with a manager there, John Henderson, but after they left Data General and the partnership fell apart, he worked with Kapor instead. They knew that for software to be fast, it needed to be written in a lower level language, so they picked Intel 8088 assembly language given that C wasn't fast enough yet.

The IBM PC came in 1981 and everything changed. Mitch Kapor and Jonathan Sachs started Lotus in 1982. Sachs got to work on what would become Lotus 1-2-3. Kapor turned out to be a great marketer and product manager. He listened to what customers said in focus groups. He pushed to make things simpler and use less jargon. They released their new spreadsheet tool in 1983 and it worked flawlessly on the IBM PC. Microsoft had Multiplan and VisiCalc was the incumbent spreadsheet program, but Lotus quickly took market share from them and from SuperCalc. Conceptually it looked similar to VisiCalc.
They used the letter A for the first column and B for the second, and so on. That has now become a standard in spreadsheets. They used the number 1 for the first row and the number 2 for the second. That too is now a standard. They added a split screen, also now a standard. They added macros, with branching if-then logic. They added different video modes, which could give color and bitmapping. They added an underlined letter so users could pull up a menu and quickly select the item they wanted once they had those orders memorized, now a standard in most menuing systems. They added the ability to add bar charts, pie charts, and line charts. One could even spread their sheet across multiple monitors like in a magazine. They refined how fields are calculated and took advantage of the larger amounts of memory to make Lotus far faster than anything else on the market.

They went to Comdex towards the end of the year and introduced Lotus 1-2-3 to the world. The software could be used as a spreadsheet, but the 2 and 3 referred to graphics and database management. They did $900,000 in orders there before they went home. They couldn't even keep up with the duplication of disks. Comdex was still invitation only. It became so popular that it was used to test for IBM compatibility by clone makers, and where VisiCalc became the app that helped propel the Apple II to success, Lotus 1-2-3 became the app that helped propel the IBM PC to success. Lotus was rewarded with $53 million in sales for 1983 and $156 million in 1984. Mitch Kapor found himself. They quickly scaled from less than 20 to 750 employees. They brought in Freada Klein, who had her PhD, to be the Head of Employee Relations and charged her with making them the most progressive employer around. After her success at Lotus, she left to start her own company and later married Kapor. Sachs left the company in 1985 and moved on to focus solely on graphics software. He still responds to requests on the phpBB forum at dl-c.com.

They ran TV commercials. They released a suite of Mac apps they called Lotus Jazz. More television commercials. Jazz didn't go anywhere and only sold 20,000 copies. Meanwhile, Microsoft released Excel for the Mac, which sold ten times as many. Some blamed the lack of sales on the stringent copy protection. Others blamed the lack of memory to do cool stuff. Others blamed the high price. It was the first major setback for the young company. After a meteoric rise, Kapor left the company in 1986, at about the height of its success. He replaced himself with Jim Manzi. Manzi pushed the company into network applications. These would become the center of the market, but they were just catching on and didn't prove to be a profitable venture just yet. A defensive posture rather than expanding into an adjacent market would have made sense - at least it would have if anyone had known how aggressive Microsoft was about to get. Manzi was far more concerned about the millions of illegal copies of the software in the market than about innovation, though. As we turned the page to the 1990s, Lotus had moved to a product built in C and introduced the ability to use graphical components in the software, but it wouldn't be ported to the new Windows operating system until 1991, for Windows 3. By then there were plenty of competitors, including Quattro Pro, and while Microsoft Excel began on the Mac, it had been a showcase of cool new features a windowing operating system could provide an application since it was released for Windows in 1987.
Excel especially showcased what they called 3D charts and tabbed spreadsheets. There was no catching up to Microsoft by then and sales steadily declined. By then, Lotus had released Lotus Agenda, an information manager that could be used for time management, project management, and as a database. Kapor was a great product manager, so it stands to reason he would build a great product to manage products. Agenda never found commercial success though, so it was later open sourced under a GPL license. Bill Gross wrote Magellan there before he left to found GoTo.com, which pioneered the idea of paid search advertising, was renamed Overture, and was acquired by Yahoo!. Magellan cataloged the contents of the local drive and so became a search engine for it. It sold half a million copies and should have been profitable, but was cancelled in 1990. They also released a word processor called Manuscript in 1986, which never gained traction and was cancelled in 1989, just when a suite of office automation apps needed to be more cohesive.

Ray Ozzie had been hired at Software Arts to work on VisiCalc and then helped Lotus get Symphony out the door. Symphony shipped in 1984 and expanded from a spreadsheet to add text with the DOC word processor, charts with the GRAPH graphics program, FORM for table management, and COM for communications. Ozzie dutifully shipped what he was hired to work on, but had a deal that when they were done he could build a company that would design software Lotus would then sell. A match made in heaven, as Ozzie had worked on PLATO and borrowed the ideas of PLATO Notes, a collaboration tool developed at the University of Illinois Urbana-Champaign, to build what he called Lotus Notes.

PLATO was more than productivity. It was a community that spanned decades, and Control Data Corporation had failed to take it to the mass corporate market. Ozzie took the best parts for a company, Iris Associates, and built it in isolation from the rest of Lotus. They finally released it as Lotus Notes in 1989. It was a huge success and Lotus bought Iris in 1994. Yet they never found commercial success with other socket-based client-server programs, and IBM acquired Lotus in 1995. That product is now known as Domino, the name of the Notes 4 server, released in 1996. Ozzie went on to build a company called Groove Networks, which was acquired by Microsoft, who appointed him one of their Chief Technology Officers. When Bill Gates left Microsoft, Ozzie took the position of Chief Software Architect he vacated. He and Dave Cutler went on to work on a project called Red Dog, which evolved into what we now know as Microsoft Azure.

Few would have guessed that Ozzie and Kapor's handshake agreement on Notes could have become a real product. Not only could people not understand the concept of collaboration and productivity on a network in the late 1980s, but that type of deal hadn't been done. But Kapor by then realized that larger companies had a hard time shipping net-new software properly. Sometimes those projects are best done in isolation. And all the better if the parties involved are financially motivated with shares, like Kapor wanted in Personal Software in the 1970s before he wrote Lotus 1-2-3. VisiCalc had sold about a million copies, but it would cease production the same year Excel was released. Lotus hung on longer than most who competed with Microsoft on any beachhead they blitzkrieged.
Microsoft released Exchange Server in 1996 and Notes had a few good years before Exchange moved in to become the standard in that market. Excel began on the Mac but eventually took the market from Lotus, after Charles Simonyi stepped in to help make the product great.

Along the way, the Lotus ecosystem created other companies, just as they were born in the Visi ecosystem. Symantec became what we now call a "portfolio" company in 1985 when they introduced NoteIt, a natural language processing tool used to annotate docs in Lotus 1-2-3. Bill Gates mentioned Lotus by name multiple times as a competitor in his Internet Tidal Wave memo in 1995. He mentioned specific features, like how they could do secure internet browsing and that they had a web publisher tool - Microsoft's own FrontPage was released in 1995 as well. He mentioned an internet directory project with Novell and AT&T. Active Directory was released a few years later in 1999, after Jim Allchin had come in to help shepherd LAN Manager. Notes itself survived into the modern era, but by 2004 Blackberry released their Exchange connector before they released the Lotus Domino connector. That's never a good sign.

Some of the history of Lotus is covered in Scott Rosenberg's 2008 book, Dreaming in Code. Other parts are documented here and there in other places. Still others are lost to time. Kapor went on to invest in UUNET, which became a huge early internet service provider. He invested in RealNetworks, who launched the first streaming media service on the Internet. He invested in the creators of Second Life. He never seemed vindictive with Microsoft, but after AOL acquired Netscape and Microsoft won the first browser war, he became the founding chair of the Mozilla Foundation and so helped bring Firefox to market. By 2006, Firefox took 10 percent of the market and went on to be a dominant force in browsers. Kapor has also sat on boards and acted as an angel investor for startups ever since leaving the company he founded.

He also flew to Wyoming in 1990 after he read a post on The WELL from John Perry Barlow. Barlow was one of the great thinkers of the early Internet. They worked with Sun Microsystems alum, GNU contributor, and cypherpunk John Gilmore to found the Electronic Frontier Foundation, or EFF. The EFF has since been the nonprofit that leads the fight for "digital privacy, free speech, and innovation." So not everything is about business.
6/27/2023 - 24 minutes, 22 seconds

Section 230 and the Concept of Internet Exceptionalism

We covered computer and internet copyright law in a previous episode. That type of law began with interpretations that tried to take the technology out of cases so they could be interpreted as though what was being protected was a printed work - or at least it did for a time. But when it came to the internet, laws, case law, and their knock-on effects began to diverge as a body of jurisprudence.

Safe Harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, a law passed in 1998 that shields online portals and internet service providers from liability for copyright infringement. Copyright is one area that got immunity, but more was needed. Section 230 is another law, one that protects those same organizations from being sued for third-party content uploaded to their sites. That's the law Trump wanted overturned during his final year in office. But given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has the Electronic Commerce Directive 2000, and courts in lots of other countries like England and Germany have found similarly, it is now part of being an Internet company. Although the future of "big tech" cases (and the damage many claim is being done to democracy) may find it refined or limited.

That's because the concept of Internet Exceptionalism itself is being reconsidered now that the internet is here to stay. Internet Exceptionalism is a term for laws that diverge from precedents for other forms of media distribution. For example, a newspaper can be sued for libel or defamation, but a website is mostly shielded from such suits because the internet is different. Pages are available instantly, changes can be made instantly, and the reach is far greater than ever before. The internet has arguably become the greatest tool to spread democracy and yet potentially one of its biggest threats. Which some might have argued about newspapers, magazines, and other forms of print media in centuries past.

The very idea of Internet Exceptionalism has eclipsed the original intent. Chris Cox and Ron Wyden initially intended to help fledgling Internet Service Providers (ISPs) jumpstart content on the internet. The internet had been privatized in 1995 and companies like CompuServe, AOL, and Prodigy were already under fire for the content on their closed networks. Cubby v. CompuServe in 1991 had found that online providers weren't considered publishers of content and couldn't be held liable for free speech practiced on their platforms, in part because they did not exercise editorial control of that content. Stratton Oakmont v. Prodigy found that Prodigy did have editorial control (and in fact advertised themselves as having a better service because of it) and so could be found liable like a newspaper would. Cox and Wyden were one of the few conservative and liberal pairs of lawmakers who could get along in the divisive era when Newt Gingrich came to power and tried to block everything Bill Clinton tried to do.

Yet there were aspects of the United States that were changing outside of politics. Congress spent years negotiating a telecommunications overhaul bill that came to be known as The Telecommunications Act of 1996. New technology led to new options. Some saw content they found to be indecent, and so the Communications Decency Act (or Title V of the Telecommunications Act) was passed in 1996, but in Reno v. ACLU it was found to violate the First Amendment and was struck down by the Supreme Court in 1997.
Section 230 of that act was specifically about the preservation of free speech, and so it was severed from the act and stood alone. It would be adjudicated time and time again and eventually became an impenetrable shield that protects online providers from the need to scan every message posted to a service to see if it would get them sued. Keep in mind that society itself was changing quickly in the early 1990s. Tipper Gore wanted to slap a label on music to warn parents that it had explicit lyrics. The "Satanic Panic," as it's called by history, reused tropes such as cannibalism and child murder to give the moral majority an excuse to try to restrict that which they did not understand. Conservative and progressive politics have always been a two-steps-forward, one-step-back truce. Heavy metal would seem like nothin' once parents heard the lyrics of gangsta rap.

But Section 230 continued on. It stated that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." It only took 26 words to change the world. They said that the people that host the content can't be sued for the content because, as courts interpreted it, it's free speech. Think of a public forum like a hall on a college campus that might restrict one group from speaking and so suppress speech or censor a group. Now, Section 230 didn't say providers weren't allowed to screen material; instead it shielded them from being held liable for that material. The authors of the bill felt that if providers could be held liable for any editing, they wouldn't do any. Now providers could edit some without reviewing every post. And keep in mind, the volume of posts in message boards and of new websites had already become too much in the late 1990s to be manually monitored. Further, as those companies became bigger businesses, they became more attractive targets for lawsuits.

Section 230 had some specific exclusions. Any criminal law could still be applied, as could state, sex trafficking, and privacy laws. Intellectual property laws also remained untouched, thus OCILLA. To be clear, reading the law, the authors sought to promote the growth of the internet - and it worked. Yelp gets sued over reviews, but cases are dismissed. Twitter can get sued over a tweet when someone doesn't like what is said, but it's the poster and not Twitter who is liable. Parody sites, whistleblower sites, watchdog sites, review sites, blogs - an entire industry was born, in which each player of what would later be known as the Web 2.0 market could self-regulate.

Those businesses grew far beyond the message boards of the 1990s. This was also a time when machine learning became more useful. A site like Facebook could show a feed of posts not in reverse chronological order, but instead by "relevance." Google could sell ads and show them based on the relevance of a search term. Google could buy YouTube and put ads on videos. Case after case poked at the edges of what could be used to hold a site liable. The fact that the courts saw a post on Reddit as free speech, no matter how deplorable the comments, provided a broad immunity to liability that was, well, exceptional in a way. Some countries could fine or imprison people if they posted something negative about the royal family or the party in charge. Some of those countries even saw that freedom of speech as a weapon that could be used against the US, in a way.
The US became, in a way, a safe haven for free speech, and many parts of the internet were anonymous. In this way (as was previously done with films and other sources of entertainment and news) the US began to export the culture of free speech. But every country also takes imports. Some of those were real, true ideas, homegrown or brought in from abroad. Early posters on message boards maybe thought the Armenian Genocide was a hoax - or the Holocaust. A single post could ruin a career. Craigslist allowed for sex trafficking, and while they eventually removed that, sites like Backpage have received immunity. So even some of the exceptions are, um, not. Further, extremist groups use pages to spread propaganda and even recruit soldiers to spread terror.

The courts found that sites were immune to suits over fake profiles on dating sites - even if it was a famous person and the person was getting threatening calls. The courts initially found sites needed to take down content if they were informed it was libelous, but sites have received broad immunity even when they don't, due to the sheer amount of content. Batzel v. Smith saw a lawyer's firm ruined over false reports that she was the granddaughter of Nazi Heinrich Himmler and the beneficiary of Nazi art theft, even though she wasn't - and she too lost her case. Sites provide neutral tools and so are shielded from defamation claims - and even when they're only neutral-ish, you rarely see them held to account. In Goddard v. Google, the Google Keyword Tool recommended that advertisers include the word "free" in mobile content, which Goddard claimed led to fraudulent subscription service recruitment. These were machine learning-based recommendations. The court again found that, provided the Keyword Tool was a neutral tool whose recommendations advertisers could adopt or reject, the provider was shielded.

Still, time and time again the idea of safe harbor for internet companies, and whether internet exceptionalism should continue, comes up. The internet gave a voice to the oppressed, but also to the oppressors. That's neutrality in a way, except that the oppressors (especially when state-sponsored actors are involved) often have more resources to drown out other voices, just like in real life. Some have argued a platform like Facebook should be held accountable for its part in the Capitol riots, which is to say as a place where people practiced free speech. Others look to Backpage as facilitating the exploitation of children or as a means of oppression. Others still see terrorist networks as existing and growing because of the ability to recruit online.

The Supreme Court is set to hear docket number 21-1333 in 2022. Gonzalez v. Google was brought by Reynaldo Gonzalez and looks at whether Section 230 can immunize Google even though they have made targeted recommendations - in this case when ISIS used YouTube videos to recruit new members - through the recommendation algorithm. An algorithm that would be neutral. But does a platform that powerful have a duty to do more, especially when there's a chance that Section 230 bumps up against anti-terrorism legislation? Again and again the district courts in the United States have found Section 230 provides broad immunization to online content providers. Now, the Supreme Court will weigh in. After that, billions of dollars may have to be pumped into better content filtration, or the courts may continue to apply broad First Amendment guidance. The Supreme Court is packed with "originalists". They still have phones, which the framers did not.
The duty that common law places on those who can disseminate negligent or reckless content has lost the requirement for reasonable care, due to the liability protections afforded purveyors of content by Section 230. This has given rise to hate speech and misinformation. John Perry Barlow's infamous A Declaration of the Independence of Cyberspace, written in protest of the CDA, was supported by Section 230 of that same law. But the removal of the idea and duty of reasonable care, and the exemptions, have now removed any accountability from what seems like any speech. Out of the ashes of accountability, the very concept of free speech and where the duty of reasonable care lies may be reborn. We now have the ability to monitor via machine learning, we've now redefined what it means to moderate, and there's now a robust competition for eyeballs on the internet. We've also seen how a lack of reasonable standards can lead to real-life consequences, and that an independent cyberspace can bleed through into the real world.

If the Supreme Court simply upholds findings from the past, then the movement towards internet sovereignty may accelerate or may stay the same. Look to where venture capital flows for clues as to how the First Amendment will crash into the free market, and see if its salty waters leave data and content aggregators with valuations far lower than where they once were. The asset of content may some day become a liability, with injuries that could provide an existential threat to the owner. The characters may walk the astral plane, but eventually they must return to the prime material plane along their tether to take a long rest or face dire consequences. The world simply can't continue to become more and more toxic - and yet there's a reason the First Amendment is, well, first. Check out "Twenty-Six Words Created the Internet. What Will It Take to Save It?"
6/5/2023 - 19 minutes, 9 seconds

Bluetooth: From Kings to Personal Area Networks

Bluetooth The King

Ragnar Lodbrok was a legendary Norse king, conquering parts of Denmark and Sweden. And if we're to believe the songs, he led some of the best raids against the Franks and the loose patchwork of nations Charlemagne put together called the Holy Roman Empire. We use the term legendary as the stories of Ragnar were passed down orally and don't necessarily reconcile with other written events. In other words, it's likely that the man in the songs sung by the bards of old is in fact a composite of deeds from many a different hero of the Norse.

Ragnar supposedly died in a pit of snakes at the hands of the Northumbrian king, and his six sons formed a Great Heathen Army to avenge their father. His sons ravaged modern England in the wake of their father's death before becoming leaders of various lands they either inherited or conquered. One of those sons, Sigurd Snake-in-the-Eye, returned home to rule his lands and had children, including Harthacnut. He in turn had a son named Gorm. Gorm the Old was a Danish king who lived to be nearly 60 in a time when life expectancy for most was about half that. Gorm raised a Jelling stone in honor of his wife Thyra. As did his son, in honor of his parents. That stone is carved with runes that say: "King Haraldr ordered this monument made in memory of Gormr, his father, and in memory of Thyrvé, his mother; that Haraldr who won for himself all of Denmark and Norway and made the Danes Christian." That stone was erected by a Danish king named Harald Gormsson. He converted to Christianity as part of a treaty with the Holy Roman Emperor of the day. He united the tribes of Denmark into a kingdom. One that would go on to expand the reach and reign of the line. Just as Bluetooth would unite devices. Even the logo is a combination of the runes that make up his initials, HB. Once united, their descendants would go on to rule Denmark, Norway, and England. For a time. Just as Bluetooth would go on to be an important wireless protocol. For a time.

Personal Area Networks

Many early devices shipped with infrared so people could use a mouse or keyboard. But those never seemed to work so great. And computers with a mouse and keyboard and drawing pad and camera and Zip drive and everything else meant that not only did devices have to be connected to sync, but they also had to pull a lot of power and create an even bigger mess on our desks. What the world needed instead was an inexpensive chip that could communicate wirelessly and not pull a massive amount of power, since some would be in constant communication. And if we needed a power cord, then we might as well just use USB or those RS-232 interfaces (serial ports) initially developed in 1960 - which were slow and cumbersome. And we could call this a Personal Area Network, or PAN.

The Palm Pilot was popular, but docking and plugging in that serial port was not exactly optimal. Yet every ATX motherboard had a port or two. So a Bluetooth Special Interest Group was formed in 1998 to conceive and manage the standard; while it initially had half a dozen member companies, it now has over 30,000. The initial development started in the late 1990s with Ericsson. It would use short-range UHF radio waves between 2.402 GHz and 2.48 GHz to exchange data with computers and cell phones, which were evolving into mobile devices at the time. The technology was initially showcased at COMDEX in 1999.
Within a couple of years there were phones that could sync, kits for cars, headsets, and chips that could be put into devices - or cards or USB adapters - to get a device to sync at 721 Kbps. We could pair up to 7 Bluetooth secondary devices to a primary, for a piconet of up to 8 devices. They then frequency hopped using a sequence derived from the Bluetooth device address and clock of the primary, which the primary shares with the secondaries during connection setup; the secondaries then sync to that clock so everyone hops together. And unlike a lot of other wireless technologies, it just kinda' worked. And life seemed good. Bluetooth went to the IEEE, which had assigned networking the 802 standards, with Ethernet being 802.3 and Wi-Fi being 802.11. So Personal Area Networks became 802.15, with Bluetooth 1.1 becoming 802.15.1. And the first phone shipped in 2001, the Sony Ericsson T39.

Bluetooth 2 came in 2005 and gave us 2.1 Mbps speeds and increased the range from 10 to 30 meters. By then, over 5 million devices were shipping every week. More devices meant a larger attack surface. And security researchers were certainly knocking at the door. Bluetooth 2.1 added secure simple pairing. Then Bluetooth 3 came in 2009, bringing those speeds up to 24 Mbps by allowing Wi-Fi to carry the traffic once a connection was established. But we were trading speed for energy, and this wasn't really the direction Bluetooth needed to go. Even if a billion devices had shipped by the end of 2006.

Bluetooth 4

The mobility era was upon us and it was increasingly important, not just for the ARM chips, but also for the rest of the increasing number of devices, to use less power. Bluetooth 4 came along in 2010 and was slower at 1 Mbps, but used less energy. This is when the iPhone 4S line fully embraced the technology, helping make it a standard. While not directly responsible for the fitness tracker craze, it certainly paved the way for a small coin cell battery to run these types of devices for long periods of time. And it allowed for connecting devices 100 meters, or well over 300 feet, away. So leave the laptop in one room and those headphones should be fine in the next.

And while we're at it, maybe we want those headphones to work on two different devices. This is where Multipoint comes into play. That's the feature of Bluetooth 4 that allows those devices to pass seamlessly between the phone and the laptop, maintaining a connection to each. Apple calls their implementation of this feature Handoff.

Bluetooth 5 came in 2016, allowing for connections up to 240 meters, or around 800 feet. Well, depending on what's between us and our devices, as with other protocols. We also got up to 2 Mbps, which dropped as we moved further away from devices. Thus we might get buffering issues or slower transfers with weaker connections. But not outright dropped connections.

Bluetooth Evolves

Bluetooth was in large part developed to allow our phones to sync to our computers. Most don't do that any more. And the developers wanted to pave the way for wireless headsets. But it also allowed us to get smart scales, smart bulbs, wearables like smart watches and glasses, Bluetooth printers, webcams, keyboards, mice, GPS devices, thermostats, and even a little device that tells me when I need to water the plants. Many home automation devices, or IoT as we seem to call them these days, began as Bluetooth, but given that we want them to work when we take all our mostly mobile computing devices out of the home, many of those have moved over to Wi-Fi these days.
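As an aside, the frequency hopping described above is easy to picture with a little arithmetic: classic Bluetooth splits the band between 2.402 and 2.480 GHz into 79 channels of 1 MHz each and hops between them 1,600 times per second, in a sequence derived from the primary's device address and clock. The short Python sketch below is only an illustration of that idea - the real hop selection kernel is defined in the Bluetooth specification, and the seeded random generator here just stands in for "derived from the address and clock."

import random

# Classic Bluetooth channels: 2402 MHz through 2480 MHz, spaced 1 MHz apart.
CHANNELS_MHZ = [2402 + k for k in range(79)]

def toy_hop_sequence(primary_address, primary_clock, hops):
    # Illustrative only: seeding a PRNG with the primary's address and clock
    # mimics "anyone who knows both can compute the same sequence."
    rng = random.Random((primary_address << 28) ^ primary_clock)
    return [rng.choice(CHANNELS_MHZ) for _ in range(hops)]

# 1,600 hops per second means each slot lasts 625 microseconds.
print(toy_hop_sequence(primary_address=0xA1B2C3D4E5F6, primary_clock=0, hops=8))

A secondary that has been given the primary's address and clock during pairing can compute the same sequence, which is how the whole piconet manages to hop in lockstep.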
Bluetooth was initially conceived as a replacement for the serial port. Higher throughput needs moved to USB and USB-C. Lower throughput has largely moved to Bluetooth, with the protocol split between Low Energy and higher-bandwidth applications, which with high-definition audio now include headphones. Once, the higher throughput needs went to parallel ports and SCSI, but now there are so many other options. And the line is blurred between what goes where. Billions of routers and switches have been sold, and billions of wireless access points. Systems on a Chip now include Wi-Fi and Bluetooth together on the same chip. The programming languages for native apps have also given us frameworks and APIs where we can establish a connection over 5G, Wi-Fi, or Bluetooth, and then hand it off where the needs diverge. Seamless to those who use our software and elegant when done right. Today over four billion Bluetooth devices ship per year, growing at about 10 percent a year. The original needs that various aspects of Bluetooth were designed for have moved to other protocols, and the future of the Personal Area Network may be at least in part moved to Wi-Fi or 5G. But for now it's a standard that has aged well and continues to make life easier for those who use it.
5/17/2023 - 13 minutes, 10 seconds

One History Of 3D Printing

One of the hardest parts of telling any history is deciding which innovations are significant enough to warrant mention. Too many, and the history is so vast that it can't be told. Too few, and it's incomplete. Arguably, no history is ever complete. Yet there's a critical path of innovation to get where we are today, and hundreds of smaller innovations that get missed along the way, or are out of scope for this exact story.

Children have probably been placing sand into buckets to make sandcastles since the beginning of time. Bricks have survived from around 7500 BCE in modern-day Turkey, where humans made molds to allow clay to dry and bake in the sun until it formed bricks. Bricks that could be stacked. And it wasn't long before molds were used for more. Now we can just print a mold on a 3D printer.

A mold is simply a block with a hollow cavity that allows putting some material in there. People then allow it to set and pull out a shape. Humanity has known how to do this for more than 6,000 years, initially with lost wax casting, with statues surviving from the Indus Valley Civilization, stretching between parts of modern-day Pakistan and India. That evolved to allow casting in gold and silver and copper, and then flourished in the Bronze Age when stone molds were used to cast axes around 3,000 BCE. The Egyptians used plaster to cast molds of the heads of rulers. So molds and then casting were known throughout the time of the earliest written works, and so the beginning of civilization. The next few thousand years saw humanity learn to pack more into those molds, to replace objects from nature with those we made synthetically, and ultimately molding and casting did their part on the path to industrialization. As we came out of the industrial revolution, the impact of all these technologies gave us more and more options, both in terms of free time as humans to think as well as new modes of thinking. And so in 1868 John Wesley Hyatt invented injection molding, patenting the machine in 1872. And we were able to mass produce not just with metal and glass and clay but with synthetics. More options came, but that whole idea of a mold to avoid manual carving and be able to produce replicas stretched back far into the history of humanity.

So here we are on the precipice of yet another world-changing technology becoming ubiquitous. And yet not. 3D printing still feels like a hobbyist's journey rather than a mature technology like we see in science fiction shows like Star Trek with their replicators, or printing a gun in the Netflix show Lost In Space. In fact the initial idea of 3D printing came from a story called Things Pass By, written all the way back in 1945!

I have a love-hate relationship with 3D printing. Some jobs just work out great. Others feel very much like personal computers in the hobbyist era - just hacking away until things work. It's usually my fault when things go awry. Just as it was when I wanted to print things out on the dot matrix printer on the Apple II. Maybe I fed the paper crooked or didn't check that there was ink first or sent the print job using the wrong driver. One of the many things that could go wrong. But those fast prints don't match with the reality of leveling and cleaning nozzles and waiting for them to heat up and pulling filament out of weird places (how did it get there, exactly)! Or printing 10 add-ons for a printer to make it work the way it probably should have out of the box.
Another area where 3D printing is similar to the early days of the personal computer revolution is that there are a few different types of technology in use today. These include color-jet printing (CJP), direct metal printing (DMP), fused deposition modeling (FDM), laser additive manufacturing (LAM), multi-jet printing (MJP), stereolithography (SLA), selective laser melting (SLM), and selective laser sintering (SLS). Each could be better for a given type of print job. Some forms have flourished while others are either in their infancy or have been abandoned, like extinct languages.

Language isolates are languages that don't fit into other families. Many are the last in a branch of a larger language family tree. Others come out of geographically isolated groups. Technology also has isolates. Konrad Zuse built computers in Germany before and after World War II that aren't considered to have influenced other computers. In other words, every technology seems to have a couple of false starts. Hideo Kodama filed the first patent to 3D print in 1980 - but his method of using UV light to harden material never got commercialized.

Another type of 3D printing includes printers that were inkjets that shot metal alloys onto surfaces. Inkjet printing was invented by Ichiro Endo at Canon in the 1970s, supposedly when he left a hot iron on a pen and ink bubbled out. Thus the "Bubble Jet" printer. And John Vaught at HP was working on the same idea at about the same time. These were patented and used to print images from computers over the coming decades. Johannes Gottwald patented a printer like this in 1971. Experiments continued through the 1970s when companies like Exxon were trying to improve various prototyping processes. Some of their engineers joined the inventor Robert Howard in the early 1980s to found a company called Howtek, and they produced the Pixelmaster, which used hot-melt solid inks in an inkjet. That approach then went on to be used by Sanders Prototype, which evolved into a company called Solidscape to market the Modelmaker. And some have been used to print solar cells, living cells, tissue, and even edible birthday cakes. That same technique is available with a number of different solutions but isn't the most widely marketable amongst the types of 3D printers available.

SLA

There's often a root from which most technology of the day is derived. Charles, or Chuck, Hull coined the term stereolithography, where he could lay down small layers of an object and then cure the object with UV light, much as dentists do with fillings today. This is made possible by photopolymers, or plastics that are easily cured by ultraviolet light. He then invented the stereolithography apparatus, or SLA for short, a machine that printed from the bottom to the top by focusing a laser on photopolymer while in a liquid form to cure the plastic into place. He worked on it in 1983, filed the patent in 1984, and was granted the patent in 1986.

Hull also developed a file format for 3D printing called STL. STL files describe the surface of a three-dimensional object geometrically, using Cartesian coordinates. Describing coordinates and vectors means we can make objects bigger or smaller when we're ready to print them. 3D printers print using layers, or slices. Those can change based on the filament on the head of a modern printer, the size of the liquid being cured, and even the heat of the nozzle.
So the STL file gets put into a slicer that converts the coordinates of the outside surface into the polygons that are cured. These are polygons in layers, so they may appear striated rather than perfectly curved, depending on the size of the layers. However, more layers take more time and energy. Such is the evolution of 3D printing.

Hull then founded a company called 3D Systems in Valencia, California to take his innovation to market. They sold their first printer, the SLA-1, in 1988. New technologies start out big and expensive. And that was the case with 3D Systems. They initially sold to large engineering companies, but when solid-state lasers came along in 1996 they were able to provide better systems for cheaper.

Languages also have other branches. Another branch in 3D printing came in 1987, just before the first SLA-1 was sold. Carl Deckard and his academic adviser Joe Beaman at the University of Texas worked on a DARPA grant to experiment with creating physical objects with lasers. They formed a company to take their solution to market called DTM and filed a patent for what they called selective laser sintering. This compacts and hardens a material with a heat source without having to liquify it. So a laser, guided by a computer, can move around a material and harden areas to produce a 3D model. Now in addition to SLA we had a second option, with the release of the Sinterstation 2500plus. 3D Systems then acquired DTM for $45 million in 2001.

FDM

After Hull published his findings for SLA and created the STL format, other standards we use today emerged. FDM is short for Fused Deposition Modeling and was created by Scott Crump in 1989. He then started a company with his wife Lisa to take the product to market, taking the company public in 1994. Crump's first patent expired in 2009.

In addition to FDM, there are other formats and techniques. AeroMat made the first 3D printer that could produce metal in 1997. These use a laser additive manufacturing process, where lasers fuse powdered titanium alloys. Some go the opposite direction and create out of bacteria or tissue. That began in 1999, when the Wake Forest Institute for Regenerative Medicine grew a 3D printed urinary bladder in a lab to be used as a transplant. We now call this bioprinting and can take tissue and lasers to rebuild damaged organs or even create a new organ. Printed organs are still in their infancy, with successful trials on smaller animals like rabbits. Another aspect is printing dinner using cell fibers from cows or other animals.

There are a number of types of materials used in 3D printing. Most printers today use a continuous feed of one of these filaments, or small coiled fibers of thermoplastics that melt instead of burn when they're heated up. The most common in use today, PLA, or polylactic acid, is a plastic initially created by Wallace Carothers of DuPont, the same person that brought us nylon, neoprene, and other plastic derivatives. It typically melts between 200 and 260 degrees Celsius. Printers can also take ABS filament, which is short for acrylonitrile butadiene styrene. Other filament types include HIPS, PET, CPE, PVA, and their derivative forms.

Filament is fed into a heated extruder assembly that melts the plastic. Once melted, filament extrudes into place through a nozzle as a motor sends the nozzle along an x and y axis per layer. Once a layer of plastic is finished being delivered to the areas required to make up the desired slice, the motor moves the extruder assembly up or down on a z axis between layers.
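Since the notes above describe STL files and slicing in words, here is a small illustrative sketch in Python of the same two ideas. It represents one facet the way STL does - three vertices in Cartesian coordinates (a real STL file also stores a surface normal for each triangle) - and shows the core slicing step of intersecting that triangle with a horizontal plane at a chosen layer height. The numbers and the helper name are made up for the example, and real slicers do this across thousands of facets while handling edge cases this sketch ignores.

# One STL-style facet: three vertices, in millimeters, describing a triangle
# on the surface of a model. Real files hold thousands of these.
facet = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (5.0, 0.0, 10.0)]

def slice_triangle(vertices, layer_z):
    # Return the points where the plane z = layer_z crosses the triangle's edges.
    crossings = []
    for i in range(3):
        x1, y1, z1 = vertices[i]
        x2, y2, z2 = vertices[(i + 1) % 3]
        # The edge crosses the layer if its endpoints sit on opposite sides of it.
        # (A vertex lying exactly on the plane is an edge case ignored here.)
        if (z1 - layer_z) * (z2 - layer_z) < 0:
            t = (layer_z - z1) / (z2 - z1)  # interpolate along the edge
            crossings.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1), layer_z))
    return crossings

# Slicing this 10 mm tall facet halfway up yields the two ends of one segment.
print(slice_triangle(facet, 5.0))

# And because the geometry is just coordinates, scaling a model before printing
# is nothing more than multiplying every vertex.
print([(2 * x, 2 * y, 2 * z) for (x, y, z) in facet])

Chaining the segments from every facet at a given height into closed loops gives the polygons for that layer, which the printer then traces and fills.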
Filament is just between 1.75 millimeters and 3 millimeters thick and comes in spools between half a kilogram and two kilograms. These thermoplastics cool very quickly. Once all of the slices are squirted into place, the print is removed from the bed and the nozzle cools off. Filament comes in a number of colors and styles. For example, wood fibers can be added to filament to get a wood-grained finish. Metal can be added to make prints appear metallic and be part metal.

Printing isn't foolproof, though. Filament often gets jammed or the spool gets stuck, usually when something goes wrong. Filament also needs to be stored in a temperature and moisture controlled location or it can cause jobs to fail. Sometimes the software used to slice the .stl file has an incorrect setting, like the wrong size of filament. But in general, 3D printing using the FDM format is pretty straightforward these days. Yet this is technology that should have moved faster in terms of adoption. The past 10 years have seen more progress than the previous ten, though. Primarily due to the maker community.

Enter the Makers

The FDM patent expired in 2009. In 2005, a few years before the FDM patent expired, Dr. Adrian Bowyer started a project to bring inexpensive 3D printers to labs and homes around the world. That project evolved into what we now call the Replicating Rapid Prototyper, or RepRap for short. RepRap evolved into an open source concept to create self-replicating 3D printers, and by 2008 the Darwin became the first printer to come out of RepRap. As a community started to form, more collaborators designed more parts. Some were custom parts to improve the performance of the printer, or to replicate the printer to become other printers. Others held the computing mechanisms in place. Some even wrote code to make the printer able to boot off a MicroSD card and then added a network interface so files could be uploaded to the printer wirelessly.

There was a rising tide of printers. People were reading about what 3D printers were doing and wanted to get involved. There was also a movement in the maker space, so people wanted to make things themselves. There was a craft to it. Part of that was wanting to share, whether that was at a maker space or sharing ideas and plans and code online. Like the RepRap team had done.

One of those maker spaces was NYC Resistor, founded in 2007. Bre Pettis, Adam Mayer, and Zach Smith from there took some of the work from the RepRap project and had ideas for a few new projects they'd like to start. The first was a site that Zach Smith created called Thingiverse. Bre Pettis joined in and they allowed users to upload .stl files and trade them. It's now the largest site for trading hundreds of thousands of designs to print about anything imaginable. Well, everything except guns.

Then comes 2009. The patent for FDM expires and a number of companies respond by launching printers and services. Almost overnight the price for a 3D printer fell from $10,000 to $1,000 and continued to drop. Shapeways had been created the year before to take files and print them for people. Pettis, Mayer, and Smith from NYC Resistor also founded a company called MakerBot Industries. They'd already made a little bit of a name for themselves with the Thingiverse site. They knew the mind of a maker. And so they decided to make a kit to sell to people that wanted to build their own printers. They sold 3,500 kits in the first couple of years. They had a good brand and knew the people who bought these kinds of devices.
So they took venture funding to grow the company. They raised $10M in funding in 2011 in a round led by the Foundry Group, along with Bezos, RRE, 500 Startups, and a few others. They hired and grew fast. Smith left in 2012, and they were getting closer and closer with Stratasys, who if we remember were the original creators of FDM. Stratasys ended up buying the company in 2013 for $403M. Sales were disappointing, so there was a changeup in leadership, with Pettis leaving, and they've become much more about additive manufacturing than a company built to appeal to makers. And yet the opportunity to own that market is still there.

This was also an era of Kickstarter campaigns. Plenty of 3D printing companies launched through Kickstarter, including some to take PLA (a biodegradable filament) and ABS materials to the next level. The ExtrusionBot, the MagicBox, the ProtoPlant, the Protopasta, Mixture, Plybot, Robo3D, Mantis, and so many more.

Meanwhile, 3D printing was in the news. 2011 saw the University of Southampton design a 3D printed aircraft and Ecologic printing cars, with practically every other car company following suit by fabricating prototypes with 3D printers - even full cars that ran. Some on their own, some accidentally when parts were published in .stl files online, violating various patents.

Ultimaker was another RepRap company that came out of the early Darwin reviews. Martijn Elserman, Erik de Bruijn, and Siert Wijnia couldn't get the Darwin to work, so they designed a new printer and took it to market. After a few iterations, they came up with the Ultimaker 2 and have since been growing and releasing new printers.

A few years later, a team of Chinese makers - Jack Chen, Huilin Liu, Jingke Tang, Danjun Ao, and Dr. Shengui Chen - took the RepRap designs and started a company called Creality to manufacture DIY (Do It Yourself) kits. They have maintained the open source manifesto of 3D printing that they inherited from RepRap and developed version after version, even raising over $33M to develop the Ender6 on Kickstarter in 2018, then building a new factory, and now have the capacity to ship well over half a million printers a year.

The Future of 3D Printing

We can now buy 3D printing pens and printers from over 170 manufacturers, including 3D Systems, Stratasys, and Creality, but also down-market solutions like Fusion3, Formlabs, Desktop Metal, Prusa, and Voxel8. There's also a RecycleBot concept and additional patents expiring every year. There is little doubt that at some point, instead of driving to Home Depot to get screws or basic parts, we'll print them. Need a new auger for the snow blower? Just print it. Cover on the weed eater break? Print it. Need a dracolich mini for the next Dungeons and Dragons game? Print it. Need a new pinky toe? OK, maybe that's a bit far. Or is it? In 2015, the Swedish company Cellink released bio-ink made from seaweed and algae, which could be used to print cartilage, and later released the INKREDIBLE 3D printer for bioprinting.

The market in 2020 was valued at $13.78 billion with 2.1 million printers shipped. That's expected to grow at a compound annual growth rate of 21% for the next few years. But a lot of that is still healthcare, automotive, aerospace, and prototyping. Apple made the personal computer simple and elegant. But no Apple has emerged for 3D printing. Instead it still feels like the Apple II era, where there are 3D printers in a lot of schools and many offer classes on generating files and printing.
3D printers are certainly great for prototypers and additive manufacturing. They're great for hobbyists, which we call makers these days. But there will be a time when there is a printer in most homes, the way we have electricity, televisions, phones, and other critical technologies. But there are a few things that have to happen first to make the printers easier to use. These include:

- Every printer needs to automatically level. This is one of the biggest reasons jobs fail and new users become frustrated.
- More consistent filament. Spools are still all just a little bit different.
- Printers need sensors in the extruder that detect if a job should be paused because the filament is jammed, humid, or caught. This adds the ability to potentially resume print jobs and waste less filament and time.
- Automated slicing in the printer microcode that senses the filament and slices.
- Better system boards (e.g. there's a tool called Klipper that moves the math from the system board on a Creality Ender 3 to a Raspberry Pi).
- Cameras on the printer should watch jobs and use TinyML to determine if they are going to fail as early as possible, to halt printing so it can start over.
- Most of the consumer solutions don't have great support. Maybe users are limited to calling a place in a foreign country where support hours don't make sense for them, or maybe the products are just too much of a hacker/maker/hobbyist solution.
- There needs to be an option for color printing. This could be a really expensive sprayer or ink like inkjet printers use at first. We love to paint the minis we make for Dungeons and Dragons, but with automated coloring we could get amazingly accurate resolutions to create amazing things.
- For a real game changer, the RecycleBot concept needs to be merged with the printer. Imagine if we dropped our plastics into a recycling bin that 3D printers of the world used to create filament. This would help reduce the amount of plastics used in the world in general. And when combined with less moving around of cheap plastic goods that could be printed at home, this also means less energy consumed by transporting goods.

The 3D printing technology is still a generation or two away from getting truly mass-marketed. Most hobbyists don't necessarily think of building an elegant, easy-to-use solution because they are so experienced that it's hard to understand what the barriers to entry are for any old person. But the company who finally manages to crack that nut might just be the next Apple, Microsoft, or Google of the world.
5/3/2023 - 30 minutes, 59 seconds

Adobe: From Pueblos to Fonts and Graphics to Marketing

The Mogollon culture was an indigenous culture in the Western United States and Mexico that ranged from New Mexico and Arizona to Sonora, Mexico, and out to Texas. They flourished from around 200 CE until the Spanish showed up and claimed their lands. The cultures that pre-existed them date back thousands more years, although archaeology has yet to pinpoint exactly how those evolved. Like many early cultures, they farmed and foraged. As they farmed more, their homes became more permanent, and around 800 CE they began to create more durable homes that helped protect them from wild swings in the climate. We call those homes adobes today, and the people who lived in those pueblos and irrigated their crops, often moving higher into the mountains, we call the Puebloans - or Pueblo Peoples. Adobe homes are similar to those found in ancient cultures in what we call Turkey today. It's an independent evolution. Adobe Creek was once called Arroyo de las Yeguas by the monks from Mission Santa Clara and then renamed to San Antonio Creek by a soldier named Juan Prado Mesa when the land around it was given to him by the governor of Alta California at the time, Juan Bautista Alvarado. That's the same Alvarado as the street, if you live in the area. The creek runs for over 14 miles north from Black Mountain and through Palo Alto, California. The ranchers built their adobes close to the creeks. American settlers led the Bear Flag Revolt in 1846 and took over the garrison of Sonoma, establishing the California Republic - which covered much of the lands of the Puebloans. There were only 33 of them at first, but after John Fremont (yes, he for whom that street is named as well) encouraged the Americans, they raised an army of over 100 men and Fremont helped them march on Sutter's fort, now with the flag of the United States, thanks to Joseph Revere of the US Navy (yes, another street in San Francisco bears his name). James Polk had pushed to expand the United States. Manifest Destiny. Remember the Alamo. Etc. The fort at Monterey fell, the army marched south. Admiral Sloat got involved. They named a street after him. General Castro surrendered - he got a district named after him. Commodore Stockton announced the US had taken all of California soon after that. Manifest destiny was nearly complete. He's now basically the patron saint of a city, even if few there know who he was. The forts along El Camino Real - the 600-mile road that linked the 21 Spanish missions, once walked by their proverbial father, Junípero Serra, following the Portolá expedition of 1769 - fell. Stockton took each, moving into Los Angeles, then San Diego. Practically all of Alta California fell with few shots. This was nothing like the battles for the independence of Texas, like when Santa Anna reclaimed the Alamo Mission. Meanwhile, the waters of Adobe Creek continued to flow. The creek was renamed in the 1850s after Mesa built an adobe on the site. Adobe Creek it was. Over the next 100 years, the area evolved into a paradise with groves of trees and then groves of technology companies. The story of one begins a little beyond the borders of California. Utah was initially explored by Francisco Vázquez de Coronado in 1540 and later settled by Europeans in search of furs and by others who colonized the desert, including members of the Church of Jesus Christ of Latter-day Saints, or the Mormons, who settled there in 1847, just after the Bear Flag Revolt.
The United States officially took possession of the territory in 1848, Utah became a territory, and after a number of map changes where the territory got smaller, it was finally made a state in 1896. The University of Utah had been founded all the way back in 1850, though - and re-established in the 1860s. 100 years later, the University of Utah was a hotbed of engineers who pioneered a number of graphical advancements in computing. John Warnock went to grad school there and then went on to co-found Adobe and help bring us PostScript. Historically, a PS, or postscript, was a message placed at the end of a letter, following the signature of the author. The PostScript language was a language to describe a page computationally. It was created at Adobe by Warnock, Doug Brotz, Charles Geschke, Ed Taft, and Bill Paxton (who worked on the Mother of All Demos with Doug Engelbart during the development of the oNLine System, or NLS, in the 1960s and then at Xerox PARC). Warnock invented the Warnock algorithm while working on his PhD and went to work at Evans & Sutherland with Ivan Sutherland, who effectively created the field of computer graphics. Geschke got his PhD at Carnegie Mellon in the early 1970s and then went off to Xerox PARC. They worked with Paxton at PARC, and before long these PhDs and mathematicians had worked out the algorithms and then the languages to display images on computers while working on Interpress graphics at Xerox. Geschke left Xerox and started Adobe, Warnock joined him, and they went to market with an Interpress-inspired language they called PostScript, which became a foundation for the Apple LaserWriter to print graphics. Not only that, PostScript could be used to define typefaces programmatically and later to display any old image. Those technologies became the foundation for the desktop publishing industry. Apple released the Mac in 1984, other vendors brought in PostScript to describe graphics in their own proprietary fashion, and Adobe released PostScript Level 2 in 1991 and then PostScript 3 in 1997. Other vendors made their own or furthered standards in their own ways, and Adobe could have faded off into the history books of computing. But Adobe didn't create one product; they created an industry, and the company they built to support that young industry created more products in that mission. Steve Jobs tried to buy Adobe before that first Mac was released, for $5,000,000. But Warnock and Geschke had a vision for an industry in mind. They had a lot of ideas, but development was fairly capital intensive, as were go-to-market strategies. So they went public on the NASDAQ in 1986. They expanded their PostScript distribution, selling it to companies like Texas Instruments for their laser printer and to other companies who made IBM-compatible computers. They got up to $16 million in sales that year. Warnock's wife was a graphic designer. This is where we see a diversity of ideas help us think about more than math. He saw how she worked and could see a world where Ivan Sutherland's Sketchpad became much more, given how far CPUs had come since the TX-0 days at MIT. So Adobe built and released Illustrator in 1987. By 1988 it broke even on sales and raked in $19 million in revenue. Sales were strong in the universities, but PostScript was still the hot product, selling to printer companies, typesetters, and other places where Adobe signed license agreements.
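To make "describing a page computationally" concrete, here's a minimal sketch - not Adobe's code, just an illustrative Python script - that writes a tiny PostScript program of the sort a LaserWriter-era interpreter could render: one line of text and a stroked triangle, placed with plain Cartesian coordinates measured in points from the bottom-left of the page.

```python
# Illustration only: generate a one-page PostScript file by hand.
# PostScript is stack-based: the numbers pushed before an operator like
# moveto or lineto are its arguments, and % starts a comment.
page = """%!PS
/Helvetica findfont 24 scalefont setfont  % choose a font and size
72 700 moveto                             % move to x=72, y=700 (points)
(Hello from the LaserWriter era) show     % paint the string
newpath                                   % describe a triangle outline
100 100 moveto
300 100 lineto
200 250 lineto
closepath
2 setlinewidth
stroke                                    % draw the outline just described
showpage                                  % emit the finished page
"""

with open("hello.ps", "w") as f:
    f.write(page)
```

Text and shapes alike are just geometry described in one device-independent language, which is why the same file could drive a 300 dpi laser printer or a high-end typesetter.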
At this point, we see how the math worked: Cartesian coordinates, drawn by geometric algorithms, put pixels where they should be. But while this was far more efficient for larger images than just drawing a dot at every coordinate, drawing a dot in a pixel location was still the easier technology to understand. They created Adobe Streamline in 1989 and the Collector's Edition to create patterns. They listened to graphic designers and built what they heard humans wanted.
Photoshop
Nearly every graphic designer raves about Adobe Photoshop. That's because Photoshop is the best-selling graphics editing tool, one that has matured far beyond most other traditional solutions and now has thousands of features that allow users to manipulate images in practically any way they want. Adobe Illustrator was created in 1987 and quickly became the de facto standard in vector-based graphics. Photoshop began life in 1987 as well, when Thomas and John Knoll wanted to build a simpler tool to create graphics on a computer. Rather than vector graphics, they created a raster graphics editor. They made a deal with Barneyscan, a well-known scanner company that managed to distribute over two hundred copies of Photoshop with their scanners, and Photoshop became a hit as it was the first editing software many people had heard about. Vector images are typically generated with Cartesian coordinates based on geometric formulas and so scale more easily. Raster images are composed of a grid of dots, or pixels, and can be more realistic. Great products are rewarded with competition. CorelDRAW was created in 1989 when Michael Bouillon and Pat Beirne built a tool to create vector illustrations. Sales got slim after other competitors entered the market, so the Knoll brothers got in touch with Adobe and licensed the product through them. The software was then launched as Adobe Photoshop 1 in 1990. They released Photoshop 2 in 1991. By now it had support for paths and, given that Adobe also made Illustrator, EPS and CMYK rasterization - still features in Photoshop. They launched Adobe Photoshop 2.5 in 1993, the first version that could be installed on Windows. This version came with a toolbar for filters and 16-bit channel support. Photoshop 3 came in 1994, and Thomas Knoll created what was probably one of the most important features ever added, one that's become a standard in graphical applications since: layers. Now a designer could create a few layers that each had their own elements, hide layers, or make layers more transparent. Layers could separate the subject from the background and led to entirely new capabilities, like an almost faux three-dimensional appearance of graphics. Then came version 4 in 1996, one of the more widely distributed versions and a very stable one. They added automation, which was later considered part of becoming a platform - open up a scripting language, or a subset of a language, so others can build tools that integrate with or sit on top of a product, thus locking people into the product once they've automated tasks to increase their efficiency. Adobe Photoshop 5.0 added editable type, or rasterized text. Keep in mind that Adobe owned technology like PostScript and so could bring technology from Illustrator to Photoshop or vice versa, and integrate with other products - like export to PDF by then. They also added a number of undo options, a magnetic lasso, and improved color management, and it was now a great tool for more advanced designers. Then in 5.5 they added a save-for-web feature, in a sign of the times. Users could create vector shapes, and the user interface continued to improve.
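The layers feature comes down to compositing: each layer is a raster with an opacity, and the renderer blends the stack from bottom to top. A rough sketch of the idea in plain Python - the standard alpha-blending formula with made-up example values, not Photoshop's actual engine:

```python
# Toy per-pixel "over" compositing for a stack of layers.
# Each layer is ((r, g, b), alpha), where alpha = 1.0 means fully opaque.
def composite(layers):
    """Blend layers bottom-to-top for a single pixel."""
    r = g = b = 0.0
    for (lr, lg, lb), alpha in layers:           # bottom layer first
        r = alpha * lr + (1 - alpha) * r
        g = alpha * lg + (1 - alpha) * g
        b = alpha * lb + (1 - alpha) * b
    return (r, g, b)

background = ((1.0, 1.0, 1.0), 1.0)   # opaque white
subject = ((1.0, 0.0, 0.0), 0.5)      # 50%-transparent red
print(composite([background, subject]))  # -> (1.0, 0.5, 0.5), a pink pixel
```

Hiding a layer just means skipping it in the loop, and changing transparency means changing its alpha - which is why designers could suddenly separate subject from background without ever destroying the pixels underneath.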
Photoshop 5 was also a big jump in complexity. Layers were easy enough to understand, but Photoshop was meant to be a subset of Illustrator features and had become far more than that. So in 2001 they released Photoshop Elements. By now they had a large portfolio of products, and Elements was meant to appeal to the original customer base - the ones who were beginners and maybe not professional designers. By now, some people spent 40 or more hours a week in tools like Photoshop and Illustrator.
Adobe Today
Adobe had released PostScript, Illustrator, and Photoshop. But they have one of the most substantial portfolios of products of any company. They also released Premiere in 1991 to get into video editing. They acquired Aldus Corporation to get into more publishing workflows with PageMaker. They used that acquisition to get into motion graphics with After Effects. They acquired dozens of companies and released their products as well. Adobe also released the PDF format to describe full pages of information (or files that spread across multiple pages) in 1993, along with Adobe Acrobat to use those. Acrobat became the de facto standard for page distribution, so people didn't have to download fonts to render pages properly. They dabbled in audio editing when they acquired Cool Edit Pro from Syntrillium Software and so now sell Adobe Audition. Adobe's biggest acquisition was Macromedia in 2005. Here, they added a dozen new products to the portfolio, which included Flash, Fireworks, the WYSIWYG web editor Dreamweaver, ColdFusion, Flex, and Breeze, which is now called Adobe Connect. By now, they'd also created what we call Creative Suite: packages of applications that could be used for given tasks. Creative Suite also signaled a transition into a software-as-a-service, or SaaS, mindset. Now customers could pay a monthly fee for a user license rather than buy large software packages each time a new version was released. Adobe had always been a company that made products to create graphics. They expanded into online marketing and web analytics when they bought Omniture in 2009 for $1.8 billion. Those products have since been folded into the naming convention used for the rest, as Adobe Marketing Cloud. Flash fell by the wayside, and so the next wave of acquisitions was for more mobile-oriented products. This began with Day Software and then Nitobi in 2011. They furthered their Marketing Cloud support by acquiring one of the larger competitors, Marketo, in 2018 and then Workfront in 2020. Given how many people started working from home, they also extended their offerings into pure-cloud video tooling with the acquisition of Frame.io in 2021. And here we see a company started by a bunch of true computer scientists from academia in the early days of the personal computer that has become far more. They could have been rolled into Apple but had a vision of a creative suite of products that could be used to make the world a prettier place. Creative Suite and then Creative Cloud show a move of the same tools into a more online delivery model. Other companies come along to do similar tasks, like the infinite digital whiteboard Miro, so Adobe has to innovate to stay marketable. They have to continue to increase sales, so they expand into other markets, like the most adjacent one, Marketing Cloud. At 22,500+ employees and with well over $12 billion in revenue, they have a lot of families dependent on maintaining that growth rate. And so the company becomes more than the culmination of their software.
They become more than graphic design, web design, video editing, animation, and visual effects. Because in software, if revenues don't grow at a rate greater than 10 percent per year, the company simply isn't outgrowing the size of the market and likely won't be able to justify a stock price at an inflated price-to-earnings ratio that assumes explosive growth. And once a company saturates sales in a given market, it still has shareholders to justify its existence to. Adobe has survived many an economic downturn and boom time with smart, measured growth and is likely to continue doing so for a long time to come.
4/16/2023 - 22 minutes, 2 seconds

The Evolution of Fonts on Computers

Gutenberg shipped the first working printing press around 1450, and the typeface was born. Before then most books were handwritten, often in blackletter calligraphy. And they were expensive. The next few decades saw Nicolas Jenson develop the Roman typeface, and Aldus Manutius and Francesco Griffo create the first italic typeface. This represented a period where people were experimenting with making type that would save space. The 1700s saw the start of a focus on readability. William Caslon created the Old Style typeface in 1734. John Baskerville developed Transitional typefaces in 1757. And Firmin Didot and Giambattista Bodoni created two typefaces that would become the modern family of Serif. Then slab Serif, which we now call Antique, came in 1815, ushering in an era of experimenting with using type for larger formats, suitable for advertisements in various printed materials. These were necessary as more presses were printing more books, and they were made possible by new levels of precision in metal casting. People started experimenting with various forms of typewriters in the mid-1860s, and by the 1920s we got Frederic Goudy, the first real full-time type designer. Before him, it was part of a job. After him, it was a job. And we still use some of the typefaces he crafted, like Copperplate Gothic. And we saw an explosion of new fonts, like Times New Roman in 1931. At the time, most typewriters used typefaces on the end of a metal shaft. Hit a key, and the shaft hammers onto a strip of ink and leaves a letter on the page. Kerning, or the space between characters, and letter placement were often there to reduce the chance that those metal hammers jammed. And replacing a font would have meant replacing tons of precision parts. Then came the IBM Selectric typewriter in 1961. Here we saw precision parts that put all those letters on a ball. Hit a key, and the ball rotates and presses the ink onto the paper. And the ball could be replaced. A single document could now have multiple fonts without a ton of work. Xerox exploded around the same time with the Xerox 914, one of the most successful products of all time. Now we could type amazing documents with multiple fonts in the same document quickly - and photocopy them. And some of the numbers on those fancy documents were being spat out by those fancy computers, with their tubes. But as computers became transistorized heading into the 60s, it was only a matter of time before we put fonts on computer screens. Here, we initially used bitmaps to render letters onto a screen. By bitmap we mean that a series, or array, of pixels on a screen is a map of bits describing where each dot should be displayed. We used to call these raster fonts, but the drawback was that to make characters bigger, we needed a whole new map of bits. To go to a bigger screen, we probably needed a whole new map of bits. As people thought about things like bold, underline, and italics - guess what, also a new file. But through the 50s, transistor counts weren't nearly high enough to do something different than bitmaps; they rendered very quickly, and, you know, displays weren't very high quality, so who could tell the difference anyway? Whirlwind was the first computer to project real-time graphics on the screen, and the characters were simple blocky letters. But as the resolution of screens and the speed of interactivity increased, so did what was possible with drawing glyphs on screens.
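To make the bitmap idea concrete, here's a small sketch - purely illustrative, not any historical font format - of a 7-pixel-wide glyph stored as one hex value per row, the same kind of hand-typed hexadecimal notation early screen fonts were built from. Each set bit becomes a lit pixel.

```python
# A toy 7x7 bitmap glyph for the letter "A", one hex value per row.
# A set bit means "light this pixel"; bit 6 is the leftmost of the 7 columns.
GLYPH_A = [0x1C, 0x22, 0x41, 0x7F, 0x41, 0x41, 0x41]

def render(rows, width=7):
    for row in rows:
        line = ""
        for col in range(width):
            bit = (row >> (width - 1 - col)) & 1
            line += "#" if bit else "."
        print(line)

render(GLYPH_A)
# Making the glyph bigger, bolder, or italic means drawing a whole new bitmap -
# which is exactly why every size and style needed its own file.
```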
Rudolf Hell was a German engineer who experimented with using cathode ray tubes to project an image onto photosensitive paper and thus print using a CRT. He designed a simple font called Digital Grotesk in 1968. It looked good on the CRT and on the paper, and that font, loosely based on Neuzeit Book, went on to be used to digitize typesetting. We quickly realized bitmaps weren't an efficient way to draw fonts to the screen, and by 1974 we moved to outline, or vector, fonts. Here, a Bézier curve was drawn onto the screen by an algorithm that created the character, or glyph, as an outline and then filled in the space between. These took up less memory and so drew on the screen faster. They could be defined in an operating system and were used not only to draw characters but also by some game designers to draw entire screens of information, by defining a character as a block and so taking up less memory to do graphics. These were scalable, and by 1979 another German, Peter Karow, used spline algorithms to write Ikarus, software that allowed a person to draw a shape on a screen and rasterize it. Now we could graphically create fonts that were scalable. In the meantime, the team at Xerox PARC had been experimenting with different ways to send pages of content to the first laser printers. Bob Sproull and Bill Newman created the Press format for the Star. But this wasn't incredibly flexible like what Karow would create. John Gaffney, who was working with Ivan Sutherland at Evans & Sutherland, had been working with John Warnock on an interpreter that could pull information from a database of graphics. When Warnock went to Xerox, he teamed up with Martin Newell to create JaM, which harnessed the latest chips to process graphics and character type for printers. As it progressed, they renamed it Interpress. Chuck Geschke started the Imaging Sciences Laboratory at Xerox PARC and eventually left Xerox with Warnock to start a company called Adobe in Warnock's garage, which they named after a creek behind his house. Bill Paxton had worked on "The Mother of All Demos" with Doug Engelbart at Stanford, where he got his PhD, and then moved to Xerox PARC. There he worked on bitmap displays, laser printers, and GUIs - and so he joined Adobe as a co-founder in 1983, worked on the font algorithms, and helped ship a page description language, along with Chuck Geschke, Doug Brotz, and Ed Taft. Steve Jobs tried to buy Adobe in 1982 for $5 million. But instead they sold him just shy of 20% of the company and got a five-year license for PostScript. This allowed them to focus on making the PostScript language more extensible and on creating the Type 1 fonts. These had two parts: one was a set of bitmaps, and another was a font file that could be used to send the font to a device. We see this time and time again: the simpler an interface and the more down-market the science gets, the faster we see innovative industries come out of the work done. There were lots of fonts by now. The original 1984 Mac saw Susan Kare work with Jobs and others to ship a bunch of fonts named after cities like Chicago and San Francisco. She would design the fonts on paper and then conjure up the hex (that's hexadecimal) for graphics and fonts. She would then manually type the hexadecimal notation for each letter of each font. Previously, custom fonts were reserved for high-end marketing and industrial designers. Apple considered licensing existing fonts but decided to go their own route.
She painstakingly created new fonts and gave them the names of towns along train stops around Philadelphia, where she grew up. Steve Jobs went for the city approach but insisted they be cool cities. And so the Chicago, Monaco, New York, Cairo, Toronto, Venice, Geneva, and Los Angeles fonts were born - with her personally developing Geneva, Chicago, and Cairo. And she did it in 9 x 7. I can still remember the magic of sitting down at a computer with a graphical interface for the first time. I remember opening MacPaint and changing between the fonts, marveling at the typefaces. I'd certainly seen different fonts in books. But never had I made a document and been able to set my own typeface! Not only that, they could be in italics, outline, and bold. Those were all her. And she inspired a whole generation of innovation. Here, we see a clean line from Ivan Sutherland and the pioneering work done at MIT, to the University of Utah, to Stanford through the oNLine System (or NLS), to Xerox PARC, and then to Apple. Then came the rise of Windows and other graphical operating systems. As Apple's five-year license for PostScript came and went, they started developing their own font standard as a competitor to Adobe, which they called TrueType. Here we saw Times Roman, Courier, and symbol fonts that could replace the PostScript fonts, and updates to Geneva, Monaco, and others. They may not have gotten along with Microsoft, but they licensed TrueType to them nonetheless to make sure it was more widely adopted. And in exchange they got a license for TrueImage, which was a page description language that was compatible with PostScript. Given how high-resolution screens had gotten, it was time for the birth of anti-aliasing. Here we could clean up the blocky "jaggies," as the gamers call them. Vertical and horizontal lines looked fine in the 8-bit era, but curves and diagonals distorted at higher resolutions, and so spatial anti-aliasing and then post-processing anti-aliasing were born. By the 90s, Adobe was looking for the answer to TrueImage. So 1993 brought us PDF, now an international standard as ISO 32000-1:2008. Acrobat Reader and other tools were good to Adobe for many years, along with Illustrator and then Photoshop and then the other products in the Adobe portfolio. By this time, even though Steve Jobs was gone, Apple was hard at work on new font technology that resulted in Apple Advanced Typography, or AAT. AAT gave us ligature control, better kerning, and the ability to write characters on different axes. But negotiations between Apple and Microsoft to license AAT broke down. They were bitter competitors, and Windows 95 wasn't even out yet. So Microsoft started work on OpenType, their own standardized font format, in 1994, and Adobe joined the project to ship the next generation in 1997. That would evolve into an open standard by the mid-2000s. And an open standard sometimes becomes the de facto standard, as opposed to those that need to be licensed. By then the web had become a thing. Early browsers, and the wars between them to increment features, meant developers had to build and test on potentially four or five different computers and often be frustrated by the results. So the W3C began standardizing how a lot of elements worked in Extensible Markup Language, or XML: images, layouts, colors, even fonts. SVGs are XML-based vector images. In other words, the browser interprets a language that describes the image.
That became a way to render fonts for the web as well. The Web Open Font Format, or WOFF 1, was published in 2009 with contributions by Dutch educator Erik van Blokland, Jonathan Kew, and Tal Leming. This built on the CSS font styling rules that had shipped in Internet Explorer 4 and would slowly be added to every browser shipped, including Firefox since 3.6, Chrome since 6.0, Internet Explorer since 9, and Apple's Safari since 5.1. Then WOFF 2 added Brotli compression to get sizes down and render faster. WOFF has been a part of the W3C open web standard since 2011. Out of Apple's TrueType came TrueType GX, which added variable fonts. Here, a single font file could contain a number or range of variants of the initial font, so a family of fonts could be in a single file. OpenType added variable fonts in 2016, with Apple, Microsoft, and Google all announcing support. And of course the company that had been there since the beginning, Adobe, jumped on board as well. Fewer font files, faster page loads. So here we've looked at the progression of fonts from the printing press, becoming more efficient to conserve paper, through the advent of the electronic typewriter, to the early bitmap fonts for screens, to the vectorization led by Adobe into the Mac and then Windows. We also saw the font rethought entirely so that multiple scripts, character sets, and axes can be represented and rendered efficiently. I am now converting all my user names into pig Latin for maximum security. Luckily those are character sets that are pretty widely supported. The ability to add color means that OpenType-SVG will allow me to add spiffy color to my pig Latin glyphs. It makes us wonder what's next for fonts. Maybe being able to design our own, or more to the point, customize those developed by others to make them our own. We didn't touch on emoji yet. But we'll just have to save the evolution of character sets and emoji for another day. In the meantime, let's think on the fact that fonts are such a big deal because Steve Jobs took a calligraphy class from a Trappist monk named Robert Palladino while enrolled at Reed College. Today we can painstakingly choose just the right font with just the right meaning because Palladino left the monastic life to marry and have a son. He taught Jobs about serif and sans serif, kerning, and the art of typography. That style and attention to detail was one aspect of the original Mac that taught the world that computers could have style and grace as well. It's not hard to imagine a world where entire computers still only supported one font, or even one font per document. Palladino never owned or used a computer, though. His influence can be felt through the influence his pupil Jobs had. And it's actually amazing how many people who had such dramatic impacts on computing never really used one - because so many smaller evolutions came after them. What evolutions do we see on the horizon today? And how many who put a snippet of code on a service like GitHub may never know the impact they have on so many?
4/10/2023 - 20 minutes, 4 seconds

Flight Part II: From Balloons to Autopilot to Drones

In our previous episode, we looked at the history of flight - from dinosaurs to the modern aircraft that carry people and things all over the world. Those helped to make the world smaller, but UAVs and drones have had a very different impact on how we lead our lives - and will have an even more substantial impact in the future. That might not have seemed so likely in the 1700s, though.
Unmanned Aircraft
Napoleon conquered Venice in 1797 and then ceded control to the Austrians the same year. He then took it back as part of a treaty in 1805 and established the first Kingdom of Italy, then lost it in 1814. And so the Venetians revolted in 1848. The Austrians crushed the revolt in part by employing balloons - which had been invented in 1783 - packed with explosives. Two hundred balloons packed with bombs later, one found a target. It's not a huge surprise that such techniques didn't get used again for some time. The Japanese tried a similar tactic to bomb the US in World War II - then there were random balloons in the 2020s, just for funsies. A few other inventions needed to find one another in order to evolve into something entirely new. Radio was invented in the 1890s. Nikola Tesla built a radio-controlled boat in 1898. Airplanes came along in 1903. Then came airships moved by radio. So it was just a matter of time before the cost of radio equipment came down enough to match the cost of building smaller airplanes that could be controlled with remote controls as well. The first documented occurrence of that was in 1907, when Percy Sperry filed a patent for a kite fashioned to look and operate like a plane, but glide in the wind. The kite string was the first remote control. Then electrical signals went through those strings, and eventually the wire turned into radio - the same progression we see with most manual machinery that needs to be mobile. Technology moves upmarket, so the Sperry Corporation outfitted an aircraft with autopilot features in 1912. At this point, that was just a gyroscopic heading indicator and attitude indicator connected to hydraulically operated elevators and rudders, but over time it would be able to react to all types of environmental changes to save pilots from having to constantly react manually while flying. That helped pave the way for longer and safer flights, as automation often does. Then came World War I. Tesla discussed aerial combat using unmanned aircraft in 1915, and Charles Kettering (who developed the electric cash register and the electric car starter) gave us the Kettering Bug, a flying, remote-controlled torpedo of sorts. Elmer Sperry worked on a similar device. British war engineers like Archibald Low were also working on attempts, but the technology didn't evolve fast enough, and by the end of the war there wasn't much interest in military funding. But a couple of decades can do a lot, both for miniaturization and for maturity of technology. 1936 saw the development of the first Navy UAV aircraft, known as the Queen Bee, under Admiral William H. Stanley, and then the QF2. It was primarily used for aerial target practice as a low-cost radio-controlled drone. The idea was an instant hit, and later on the military called for the development of similar systems, many of which came from Hollywood of all places. Reginald Denny was a British gunner in World War I. They shot things from airplanes. After the war he moved to Hollywood to be an actor.
By the 1930s he got interested in model airplanes that could fly and joined up with Paul Whittier to open a chain of hobby shops. He designed a few planes and eventually grew the operation to sell them to the US military as targets. The Radioplanes, as they would be known, even got joysticks, and they sold tens of thousands during World War II. War wasn't the only use for UAVs. Others were experimenting, and 1936 brought the first radio-controlled model airplane competition, a movement that continued to grow and evolve into the 1970s. We got the Academy of Model Aeronautics (or AMA) in 1936, which launched a magazine called Model Aviation and continues to publish it, provide insurance, and act as the UAV, RC airplane, and drone community's representative to the FAA. Their membership still runs close to 200,000. Most of these model planes were managed from the ground using radio remote controls. The Federal Communications Commission, or FCC, was established in 1934 to manage the airwaves. They stepped in to manage which frequencies could be used for different use cases in the US, including radio-controlled planes. Where there is activity, there are stars. The Big Guff, built by brothers Walt and Bill Good, was the first truly successful RC airplane in that hobbyist market. Over the next decades, solid state electronics got smaller, cheaper, and more practical, as did the way we could transmit bits over those wireless links. 1947 saw the first radar-guided missile, the subsonic Firebird, which over time evolved into a number of programs. Electro-mechanical computers had been used to calculate trajectories for ordnance during World War II. With knowledge of infrared, we got infrared homing, then television cameras mounted in missiles, and when those were combined with the proximity fuse - which came with small pressure, magnetic, acoustic, radio, and then optical transmitters - we got much better at blowing things up. Part of that was studying the German V-2 rocket programs, which used an analog computer to control the direction and altitude of missiles. The US Polaris and Minuteman missile programs added transistors and then microchips to missiles to control the guidance systems. Rockets had computers, and so they showed up in airplanes to aid humans in guiding those, often replacing Sperry's original gyroscopic automations. The Apollo Guidance Computer from the 1969 moon landing was an early example of humans even putting their lives in the hands of computers - with manual override capabilities, of course. Then, as the price of chips fell in the 1980s, we started to see them in model airplanes.
Modern Drones
By now, radio-controlled aircraft had been used for target practice, to deliver payloads and blow things up, and even for spying. Aircraft without humans to weigh them down could run on electric motors rather than combustion engines, and thus they were quieter. This technology allowed UAVs to fly undetected, laying the very foundation for the modern depiction of drones used by the military for covert operations. As the costs fell and carrying capacity increased, we saw them used in filmmaking, surveying, weather monitoring, and anywhere else a hobbyist could use their hobby in their career. But the cameras weren't that great yet. Then came the charge-coupled device, or CCD, in 1969, with Fairchild soon building CCD sensors. The first digital camera arguably came out of Eastman Kodak in 1975, when Steven Sasson built a prototype using a mixture of batteries, movie camera lenses, Fairchild CCD sensors, and Motorola parts.
Sony came out with the Magnetic Video Camera in 1981, and Canon put the RC701 on the market in 1986. Fuji, Dycam, and even the Apple QuickTake came out in the next few years. Cameras were getting better resolution, and as we turned the page into the 1990s, those cameras got smaller and used CompactFlash to store images and video files. The first aerial photograph is attributed to Gaspar Tournachon, but the militaries of the world used UAVs - B-17s and Grumman Hellcats from World War II that had been converted to drones full of sensors - to study nuclear radiation clouds when testing weapons. Those evolved into reconnaissance drones like the Aerojet SD-2, with mounted analog cameras, in the 50s and 60s. During that time we saw the Ryan Firebees and DC-130As run thousands of flights snapping photos to aid intelligence gathering. Every country was in on it: the USSR, Iran, North Korea, Britain. And the DARPA-instigated Amber and then Predator drones might be considered the modern precursors to the drones we play with today. Again, we see larger military uses come down-market once secrecy and cost meet a cool factor. DARPA spent $40 million on the Amber program; manufacturers of consumer drones have certainly made far more than that. Hobbyists started to develop Do It Yourself (DIY) drone kits in the early 2000s. Now that there were websites, we didn't have to wait for magazines to show up; we could take to forums on the World Wide Web and trade ideas for how to do what the US CIA had done when they conducted the first armed drone strike in 2001 - just maybe without the weapon systems, since this was in the back yard. Lithium-ion batteries were getting cheaper and lighter, as were much faster chips. Robotics had come a long way as well, and moving the small parts of a model aircraft was much simpler than avoiding all the chairs in a room at Stanford. Hobbyists turned into companies that built and sold drones of all sizes, some of which got in the way of commercial aircraft. So the FAA started issuing drone permits in 2006. Every technology has a point where the confluence of all these technologies meets in a truly commercially viable product. We had Wi-Fi, RF (or radio frequency) links, iPhones, mobile apps, and tiny digital cameras in our phones and even in spy teddy bears; we understood flight and propellers; and plastics were heavier than air but lighter than metal. So in 2010 we got the Parrot AR Drone, the first drone sold to the masses that was just plug and play. An explosion of drone makers followed, with consumer products now ranging from around $20 to hundreds of dollars. Drone races, drone aerogymnastics, drone footage on our Apple and Google TV screens - and, with TinyML projects for every possible machine learning need we can imagine, UAVs that stabilize cameras, find objects based on information we program into them, and handle any other use we can imagine. The concept of drones, or unmanned aerial vehicles (UAVs), has come a long way since the Austrians tried to bomb the Venetians into submission. Today there are mini drones, foldable drones, massive drones that can carry packages, racing drones, and even military drones programmed to kill. In fact, right now there are debates raging in the UN around whether to allow drones to autonomously kill. Because Skynet. We're also experimenting with passenger drone technology, because autonomous driving is another convergence just waiting in the wings.
Imagine going to the top of a building and getting in a small pod, then flying a few buildings over - or to the next city. Maybe in our lifetimes, but not as soon as some of the companies that have gone public to do just this once thought.
4/3/2023 - 19 minutes, 6 seconds

Flight: From Dinosaurs to Space

Humans have probably considered flight since they first saw birds. As far back as 228 million years ago, pterosaurs used flight to rain down onto other animals from above and eat them. The first known bird-like dinosaur was the Archaeopteryx, which lived around 150 million years ago. It's not considered an ancestor of modern birds - but other dinosaurs from the same era, the theropods, are. 25 million years later, in modern China, the Confuciusornis sanctus had feathers and could have flown. The first humans wouldn't emerge from Africa until well over a hundred million years later. By the 2300s BCE, the Sumerians depicted shepherds riding eagles, as humanity looked to the skies in our myths and legends. Those were creatures, not vehicles. The first documented vehicle of flight came as far back as the 7th century BCE, when the Rāmāyana told of the Pushpaka Vimāna, a palace made by Vishwakarma for Brahma, complete with chariots that flew the king Rama high into the atmosphere. The Odyssey was written around the same time and tells of the Greek pantheon of Gods but doesn't reference flight as we think of it today. Modern interpretations might move floating islands to the sky, but it seems more likely that the floating island of Aeolia is really the islands off Aeolis, or Anatolia, which we might refer to as the modern land of Turkey. Greek myths from a few hundred years later introduced more who were capable of flight. Icarus flew too close to the sun with wings that had been fashioned by Daedalus. By then, they could have been aware, through trade routes cut by Alexander and later rulers, of kites from China. The earliest attempts at flight trace their known origins to 500 BCE in China. Kites were, like most physical objects, heavier than air, yet could still be used to lift an object into flight. Some of those early records even mention the ability to lift humans off the ground with a kite. The principle used in kites was used later in the development of gliders and then, when propulsion was added, modern aircraft. Any connection between any of these is conjecture, as we can't know how well the whisper net worked in those ages. Many legends are based on real events. The history of humanity is vast, and many of our myths are handed down through the generations. The Greeks had far more advanced engineering capabilities than some of the societies that came after. They were still wary of what happened if they flew too close to the sun. In fact, emperors of China are reported to have forced some to leap from cliffs on a glider as a means of punishment. Perhaps that was where the fear of flight for some originated. The Chinese emperor Wang Mang used a scout with bird features to glide on a scouting mission around the same time as the Icarus myth might have been documented. Whether this knowledge informed the storytellers Ovid drew on for his story of Icarus is lost to history, since he didn't post it to Twitter. Once the Chinese took the string off the kite and kites got large enough to fly with a human, they had also developed hang gliders. In the third century BCE, Chinese inventors added the concept of rotors for vertical flight when they developed helicopter-style toys, which were then used to frighten off enemies. Some of those evolved into the beautiful paper lanterns that fly when lit. There were plenty of other evolutions and false starts with flight after that. Abbas ibn Firnas also glided with feathers in the 9th century. A Benedictine monk did so again in the 11th century.
Both were injured when they jumped from towers, in a Middle Ages that spanned the Muslim Golden Age to England. Leonardo da Vinci studied flight for much of his life. His studies produced another human-powered ornithopter and other contraptions; however, he eventually realized that humans would not be able to fly under their own power alone. Others attempted the same old wings made of bird feathers, wings that flapped on the arms, wings tied to legs, different types of feathers, finding higher places to jump from, and anything else they could think of. Many broke bones, which continued until we found ways to supplement human power to propel us into the air. Then a pair of brothers in the Ottoman Empire had some of the best luck. Hezarfen Ahmed Çelebi crossed the Bosphorus strait on a glider. That was 1633, and by then gunpowder had already helped the Ottomans conquer Constantinople. That ended the last vestiges of ancient Roman influence, along with the Byzantine Empire, as the conquerors renamed the city Istanbul. That was the power of gunpowder. His brother then built a rocket using gunpowder and launched himself high into the air before gliding back to the ground. The next major step was the hot air balloon. The modern hot air balloon was built by the Montgolfier brothers in France and first ridden in 1783 (Petrescu & Petrescu, 2013). Ten days later, the first gas balloon was invented by Nicholas Louis Robert and Jacques Alexander Charles. The gas balloon used hydrogen and, in 1785, was used to cross the English Channel. That trip sparked the era of dirigibles. We built larger balloons to lift engines with propellers, and that began a period that culminated with the Zeppelin. From the 1700s on, much of what da Vinci realized was rediscovered, but this time published, and the body of knowledge built out. The physics of flight were then studied as new sciences emerged. Sir George Cayley started to actually apply physics to flight in the 1790s.
Powered Flight
We see this over and over in history: once we understand the physics and can apply science, progress starts to speed up. That was true when Archimedes defined force multipliers with the simple machines in the 3rd century BCE, true with solid state electronics far later, and true with Cayley's research. Cayley conducted experiments, documented his results, and proved hypotheses. He finally got to codifying bird flight and why it worked. He studied the Chinese tops that worked like modern helicopters. He documented gliding flight and applied math to why it worked. He defined drag and measured the force of windmill blades. In effect, he got to the point where he knew how much power was required, relative to weight, to actually sustain flight. Then, to achieve that, he explored the physics of fixed-wing aircraft, complete with an engine, tail assembly, and fuel. His work culminated in a paper called "On Aerial Navigation," published in 1810. By the mid-1850s, there was plenty of research flowing toward the goal of sustained air travel. Ideas like rotors led to rotorcraft. Those were all still gliding. Even with Cayley's research, we had triplane gliders and gliders launched from balloons. After that, the first aircraft that looked like the modern airplanes we think of today were developed. Cayley's contributions were profound. He even described how to mix air with gasoline to build an engine. Influenced by his work, others built propellers. Some of those were steam-powered and others powered by tight springs, like clockwork.
Aeronautical societies were created, wing shapes and camber were experimented with, and wheels were added to try to lift off. Some even lifted a little off the ground. By the 1890s, the first gasoline-powered biplane gliders were developed and flown, even if those early experiments crashed. Humanity was finally ready for powered flight. The Smithsonian housed some of the earliest experiments. They hired their third director, Samuel Langley, in 1887. He had been interested in aircraft for decades and, as with many others, had studied Cayley's work closely. He was a consummate tinkerer and had already worked in solar physics and developed the Allegheny Time System. The United States War Department gave him grants to pursue his ideas to build an airplane. By then, there was enough science that humanity knew it was possible to fly, and so there was a race to build powered aircraft. We knew the concepts of drag, rudders, and thrust from some of the engineering built into ships, some of which had been successfully used in the motorcar. We also knew how to build steam engines, which is what he used in his craft. He called it the Aerodrome and built a number of models. He was able to make it further than anyone at the time, but he abandoned flight in 1903 when someone beat him to the finish line. That's the year humans stepped beyond gliding and into the first controlled, sustained, and powered flight. There are reports that Gustave Whitehead beat the Wright Brothers, but he didn't keep detailed notes or logs, and so the Wrights are often credited with the discovery. They managed to solve the problem of how to roll, built steerable rudders, and built the first biplane with an internal combustion engine. They flew their first airplane out of North Carolina when Orville Wright went 120 feet and his brother went 852 feet later that day. That plane now lives at the National Air and Space Museum in Washington, DC, and December 17th, 1903, represents the start of the age of flight. The Wrights had spent two years testing gliders and managed to document their results. They studied in wind tunnels, tinkered with engines, and were methodical, if not scientific, in their approach. They didn't manage to have a public demonstration until 1908, though, and so there was a lengthy battle over the patents they filed. Turns out it was a race, and there were a lot of people who flew within months of one another. Decades of research culminated in what had to be: airplanes. Innovation happened quickly. Flight improved enough that planes could cross the English Channel by 1909. There were advances after that, but patent wars over the invention dragged on, and so investors stayed away from the unproven technology.
Flight for the Masses
The superpowers of the world were at odds for the first half of the 1900s. An Italian pilot flew a reconnaissance mission in Libya in the Italo-Turkish War in 1911. It took only nine days to go from reconnaissance to dropping grenades on Turkish troops from the planes. The age of aerial warfare had begun. The Wrights had received an order for the first military plane back in 1908. Military powers took note, and by World War I there was an air arm in every military power. Intelligence wins wars. The innovation was ready for the assembly lines, so during and after the war the first airplane manufacturers were born. Dutch engineer Anthony Fokker was inspired by Wilbur Wright's exhibition in 1908. He went on to start a company and design the Fokker M.5, which evolved into the Fokker E.I.
after World War I broke out in 1914. They mounted a machine gun and synchronized it to the propeller in 1915. Manfred von Richthofen, also known as the Red Baron, flew one before he upgraded to the Fokker D.VII and later an Albatros. Fokker made it all the way into the 1990s before they went bankrupt. Albatros was founded in 1909 by Enno Huth, who went on to found the German Air Force before the war. The Bristol Aeroplane Company was born in 1910 after Sir George White, who was already involved in transportation, met Wilbur Wright in France. Previous companies had been built to help hobbyists, similar to how many early PC companies came from inventors as well. This can be seen with people like Maurice Mallet, who helped design gas balloons and dirigibles. He licensed airplane designs to Bristol, who later brought in Frank Barnwell and other engineers who helped design the Scout. The Bristol Fighters used in World War I were based on those designs. Another British manufacturer was Sopwith, started by Thomas Sopwith, who taught himself to fly and then started a company to make planes. They built over 16,000 by the end of the war. After the war they pivoted to make ABC motorcycles and eventually sold to Hawker Aircraft in 1920, which later sold to Raytheon. The same paradigm played out elsewhere in the world, including the United States. Once those patent disputes were settled, plenty knew flight would help change the world. By 1917 the patent wars in the US had to end, as the country's contributions to flight suffered. No investor wanted to touch the space, and so there was a lack of capital to expand. Wilbur Wright passed away in 1912 and Orville sold his rights to the patents, so the Assistant Secretary of the Navy, Franklin D. Roosevelt, stepped in and brought all the parties to the table to develop a cross-licensing organization. After almost 25 years, we could finally get innovation in flight back on track globally. In rapid succession, Loughead Aircraft, Lockheed, and Douglas Aircraft were founded. Then Jack Northrop left those and started his own aircraft company. Boeing was founded in 1916 as Pacific Aero Products and later became part of United Aircraft, which was broken up in the 1930s, spinning United Airlines off as a carrier while Boeing continued to make planes. United was only one of many commercial airlines that were created. Passenger air travel started soon after the first flights, with the first airline ferrying passengers in 1914. With plenty of airplanes assembled at all these companies, commercial travel was bound to explode into its own big business. Delta started as a cropdusting service in Macon, Georgia in 1925 and has grown into an empire. The world's largest airline at the time of this writing is American Airlines, which started in 1926 when a number of smaller airlines banded together. Practically every country had at least one airline: Pan American (Pan Am for short) in 1927, Ryan Air in 1926, Slow-Air in 1924, Finnair in 1923, Qantas in 1920, KLM in 1919, and the list goes on. Enough that the US passed the Air Commerce Act in 1926, which over time led to the Department of Air Commerce, which evolved into the Federal Aviation Administration, or FAA, we know today. Aircraft were refined and made more functional. World War I brought with it the age of aerial combat.
Plenty of supply after the war and then the growth of manufacturers brought further innovation as they competed with one another, and commercial aircraft and industrial uses (like cropdusting) enabled more investment in R&D. In 1926, the first flying boat service was inaugurated from New York to Argentina. Another significant development in aviation came in the 1930s, when the jet engine was invented. Frank Whittle registered a turbojet engine patent, and Hans von Ohain developed a jet plane called the Heinkel He 178 (Grant, 2017). That plane first flew in 1939, but the Whittle jet engine is the ancestor of those found in planes in World War II and beyond. And from there to the monster airliners, stealth fighters, or the X-15 becomes a much larger story. The aerospace industry continued to innovate, both in the skies and into space. The history of flight entered another phase in the Cold War. The RAND Corporation developed the concept of intercontinental ballistic missiles (or ICBMs), and the Soviet Union launched the first satellite into space in 1957. Then in 1969, Neil Armstrong and Buzz Aldrin made the first landing on the moon, and we continued to launch into space throughout the 1970s to 1990s, before opening up space travel to private industry. Those projects got bigger and bigger and bigger. But generations of enthusiasts and engineers were inspired by devices far smaller, and without pilots in the device.
3/25/2023 - 22 minutes, 57 seconds

SABRE and the Travel Global Distribution System

Computing has totally changed how people buy and experience travel. That process seemed to start with sites that made it easy to book travel, but as with most things we experience in our modern lives, it actually began far sooner and moved down-market as generations of computing led to more consumer options for desktops, the internet, and the convergence of these technologies. Systems like SABRE did the original work to re-think travel - to take logic and rules out of the heads of booking and travel agents and put them into a digital medium. In so doing, they paved the way for future generations of technology and to this day retain a valuation of over $2 billion. SABRE is short for Semi-Automated Business Research Environment. It's used to manage over a third of global travel, to the tune of over a quarter trillion US dollars a year. It's used by travel agencies and travel services to reserve car rentals, flights, hotel rooms, and tours. Since Sabre was released, services like Amadeus and Travelport were created, giving the world the Global Distribution System, or GDS. Passenger air travel began when airlines ferrying passengers cropped up in 1914, but the big companies began around the 1920s, with KLM in 1919, Finnair in 1923, Delta in 1925, American Airlines and Ryan Air in 1926, Pan American in 1927, and the list goes on. They grew quickly, and by 1926 the Air Commerce Act led to a new department in the government called Air Commerce, which evolved into the FAA, or Federal Aviation Administration, in the US. And each country, given the possible dangers these aircraft posed as they got bigger and loaded with more and more fuel, had its own such department. The aviation industry blossomed in the roaring 20s as people traveled and found romance and vacation. At the time, most airlines were somewhat regional, and people found travel agents to help them along their journey to book travel, lodgings, and often food. The travel agent naturally took over air travel, much as they'd handled sea travel before. But there were dangers in traveling in those years between the two World Wars: Nazis rising to power in Germany, Mussolini in Italy, communist cleansings in Russia and China. Yet a trip to the Great Pyramid of Giza could now be a week instead of months. Following World War II, there was a fracture in the world between Eastern and Western powers, or those who aligned with the former British Empire and those who aligned with the former Russian Empire, now known as the Soviet Union. Travel within the West exploded, as those areas were usually safe and often happy to accept the US dollar. Commercial air travel boomed, not just for the wealthy but for all. People had their own phones now, and could look up a phone number in a phone book and call a travel agent. The travel agents then spent hours trying to build the right travel package. That meant time on the phone with hotels and time on the phone with airlines. Airlines like American had to hire larger and larger call centers of humans to help find flights. We didn't just read about Paris; we wanted to go. Wars had connected the world, and now people wanted to visit the places they'd previously just seen in art books or read about in history books. But those call centers grew. A company like American Airlines couldn't handle all of its ticketing needs, and the story goes that the CEO was sitting beside a salesman from IBM when they came up with the idea of a computerized reservation system.
And so SABRE was born in the 1950s, when American Airlines agreed to develop a real-time computing platform. Here, we see people calling in and pressing buttons to run commands on computers. The tones weren’t that different than a punch card, really. The system worked well enough for American that they decided to sell access to other firms. The computers used were based loosely on the IBM mainframes used in the SAGE missile air defense system. Here we see the commercial impact of the AN/FSQ-7 the US government hired IBM to build, as IBM added transistorized options to the IBM 704 mainframe in 1955. That gave IBM the interactive computing technology that evolved into the 7000 series mainframes. Now that IBM had the interactive technology, and a thorough study had been done to evaluate the costs and impacts of a new reservation system, American and IBM signed a contract to build the system in 1957. They went live to test reservation booking shortly thereafter. But it turns out there was a much bigger opportunity here. See, American and other airlines had paper processes to track how many people were on a flight and quickly find open seats for passengers, but it could take an hour or three to book tickets. This was fairly common before software ate the world: everything from standing in line at the bank, to booking dinner at a restaurant, to reserving a rental car, to booking hotel rooms, and the list goes on. There were a lot of manual processes in the world - people weren’t just going to punch holes in a card to program their own flight and wait for some drum storage to tell them if there was an available seat. That was the plan American initially had in 1952 with the Magnetronic Reservisor. That never worked out. American had grown into one of the largest airlines and knew the perils and costs of developing software and hardware like this. Their system cost $40 million in 1950s money to build with IBM. They also knew that as other airlines grew to accommodate more people flying around the world, the more flights there were, the longer that hour or three took. So they should of course sell the solution they built to other airlines. Thus, parlaying the SAGE name, famous as a Cold War shield against nuclear attack, Sabre Corporation began. It was fairly simple at first, with a pair of IBM 7090 mainframes that could take over 80,000 calls a day in 1960. Some travel agents weren’t fans of the new system, but those who embraced it found they could get more done in less time. Sabre sold reservation systems to airlines and soon expanded to become the largest data processor in the world - far better than the Reservisor would have been, and now able to help bring the whole world into the age of jet airplane travel. That exploded to thousands of bookings an hour in the 1960s, and eventually all booking was turned over to the computer. The system got busy and over the years IBM upgraded the computers to the S/360. They also began to lease systems to travel agencies in the 1970s, after Max Hopper joined the company and began the plan to open up the platform as TWA had done with their PARS system. Then they went international and opened service bureaus in other cities (given that we once had to pay a toll charge to call a number). And by the 1980s Sabre was how the travel agents booked flights. The 1980s brought easySABRE, so people could use their own computers to book flights, and by then - and through to the modern era - a little over a third of all reservations are made on Sabre.
By the mid-1980s, United had their own system called Apollo, Delta had one called Datas, and other airlines had their own as well. But SABRE could be made airline neutral. IBM had been involved with many American competitors, developing Deltamatic for Delta, PANAMAC for Pan Am, and other systems. But SABRE could be hooked to the new online services for a whole new way to connect systems. One of these was CompuServe in 1980, then Prodigy, GEnie, and AOL as we turned the corner into the 1990s. Then they started a site called Travelocity in 1996, which was later sold to Expedia. In the meantime, they got serious competition, which eventually led to a slew of acquisitions to remain competitive. The competition included Amadeus, Galileo International, and Worldspan, now a provider in the Travelport GDS. The first of them originated from United Airlines, and by 1987 was joined by Aer Lingus, Air Portugal, Alitalia, British Airways, KLM, Olympic, Sabena, and Swissair to create Galileo, which was then merged with the Apollo reservation system. The technology was acquired through a company called Videcom International, which started developing reservation software in 1972, shortly after the Apollo and Datas services went online. They focused on travel agents and branched out into reservation systems of all sorts in the 1980s. As other systems arose they provided aggregation by connecting to Amadeus, Galileo, and Worldspan. Amadeus was created in 1987 to be a neutral GDS after the issues with Sabre directing reservations to American Airlines. That was through a consortium of Air France, Iberia, Lufthansa, and SAS. They acquired the assets of the bankrupt System One and eventually added other travel options including hotels, car rentals, travel insurance, and other amenities. They went public in 1999 just before Sabre did, and were also taken private just before Sabre was. Worldspan was created in 1990 as the result of merging or interconnecting the systems of Delta, Northwest Airlines, and TWA; it was then acquired by Travelport in 2007. By then, SABRE had their own programming languages. While the original Sabre code was written in assembly, they wrote their own language called SabreTalk and later transitioned to standard REST endpoints. They also weren’t a part of American any longer. There were too many problems with manipulating how flights were displayed to benefit American Airlines, and they had to make a clean cut - especially after Congress got involved in the 1980s and outlawed that type of bias in screen placement. Now that they were a standalone company, Sabre went public, was taken private by private equity firms in 2007, and relisted on NASDAQ in 2014. Meanwhile, travel aggregators had figured out they could hook into the GDS systems and sell discount airfare without a percentage going to travel agents. Now that the GDS systems weren’t a part of the airlines, they were able to put downward pressure on prices. Hotwire, which used Sabre and a couple of other systems, and TripAdvisor, which booked travel through Sabre and Amadeus, were created in 2000. Microsoft had launched Expedia in 1996, and it had done well enough to get spun off into its own public company by 2000. Travelocity operated inside Sabre until it was sold, and so the airlines put together a site of their own that they called Orbitz, which in 2001 was the biggest e-commerce site to have ever launched.
And out of the bursting of the dot-com bubble came online travel bookings. Kayak came in 2004. Sabre later sold Travelocity to Expedia, which uses Sabre to book travel. That allowed Sabre to focus on providing the back-end travel technology. They now do over $4 billion in revenue in their industry. American Express had handled travel for decades but also added flights and hotels to their site, integrating with Sabre and Amadeus as well. Here, we see a classic paradigm in play. First the airlines moved their travel bookings from paper filing systems to isolated computer systems - what we’d call mainframes today. The airlines then rethought the paradigm and aggregated other information into a single system, or a system intermixed with other data. In short, they enriched the data. Then those systems were exposed as APIs to further remove human labor and put the work on an assembly line. Sites hook into those, and the GDS systems, as with many aggregators, get spun off into their own companies. The aggregated information then benefits consumers (in this case travelers) with more options and cheaper fares. This helps counteract the centralization of a market where airlines acquire other airlines, but in some ways it also cheapens the experience. Gone are the days when a travel agent guides us through our budgets and helps us build a killer itinerary. But in a way that just makes travel much more adventurous.
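To picture that “expose it as an API” step, here’s a minimal sketch of the kind of availability lookup an aggregator might run against a GDS-style service. The endpoint, parameters, and response shape below are hypothetical placeholders rather than Sabre’s actual API - the point is just the pattern of a website calling into an enriched reservation system instead of a human working the phones.

```python
# Hypothetical sketch of an aggregator querying a GDS-style availability endpoint.
# The URL, parameters, and response fields are placeholders for illustration only.
import requests


def search_flights(origin: str, destination: str, date: str) -> list[dict]:
    """Query a (hypothetical) availability endpoint and return a list of fare offers."""
    response = requests.get(
        "https://api.example-gds.com/v1/availability",  # placeholder URL, not a real service
        params={"origin": origin, "destination": destination, "date": date},
        timeout=10,
    )
    response.raise_for_status()
    # Assume the service returns JSON shaped like {"offers": [{"carrier": ..., "fare": ...}]}
    return response.json().get("offers", [])


if __name__ == "__main__":
    for offer in search_flights("JFK", "LHR", "2023-03-16"):
        print(offer.get("carrier"), offer.get("fare"))
```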
3/16/2023 · 19 minutes, 16 seconds

The Story of Intel

We’ve talked about the history of microchips, transistors, and other chip makers. Today we’re going to talk about Intel in a little more detail. Intel is short for Integrated Electronics. They were founded in 1968 by Robert Noyce and Gordon Moore. Noyce was an Iowa kid who went off to MIT to get a PhD in physics in 1953. He then joined the Shockley Semiconductor Lab to work with William Shockley, who’d developed the transistor as a means of bringing a solid-state alternative to vacuum tubes in computers and amplifiers. Shockley became erratic after he won the Nobel Prize and 8 of the researchers left, now known as the “traitorous eight.” From them came over 60 companies, including Intel - but first they went on to create a new company called Fairchild Semiconductor, where Noyce invented the monolithic integrated circuit in 1959, or a single chip that contains multiple transistors. After 10 years at Fairchild, Noyce joined up with coworker and fellow traitor Gordon Moore. Moore had gotten his PhD in chemistry from Caltech and had made an observation while at Fairchild that the number of transistors, resistors, diodes, or capacitors in an integrated circuit was doubling every year, and so coined Moore’s Law: that it would continue to do so. They wanted to make semiconductor memory cheaper and more practical. They needed money to continue their research. Arthur Rock had helped them find a home at Fairchild when they left Shockley and helped them raise $2.5 million in backing in a couple of days. The first day of the company, Andy Grove joined them from Fairchild. He’d fled the Hungarian revolution in the 50s and gotten a PhD in chemical engineering at the University of California, Berkeley. Then came Leslie Vadász, another Hungarian emigrant. Funding and money coming in from sales allowed them to hire some of the best in the business - people like Ted Hoff, Federico Faggin, and Stan Mazor. That first year they released 64-bit static random-access memory in the 3101 chip, doubling what was on the market, as well as the 3301 read-only memory chip and the 1101. Then came DRAM, or dynamic random-access memory, in the 1103 in 1970, which became the bestselling chip within the first couple of years. Armed with a lineup of chips and an explosion of companies that wanted to buy the chips, they went public within 2 years of being founded. 1971 saw Dov Frohman develop erasable programmable read-only memory, or EPROM, while working on a different problem. This meant they could reprogram chips using ultraviolet light and electricity. In 1971 they also created the Intel 4004 chip, which was started in 1969 when a calculator manufacturer out of Japan asked them to develop 12 different chips. Instead they made one that could do all of the tasks of the 12, outperforming the ENIAC from 1946, and so the era of the microprocessor was born. And instead of taking up a basement at a university lab, it took up an eighth of an inch by a sixth of an inch to hold a whopping 2,300 transistors. The chip didn’t contribute a ton to the bottom line of the company, but they’d built the first true microprocessor, which would eventually be what they were known for. Instead they were making DRAM chips. But then came the 8008 in 1972, ushering in an 8-bit CPU. Their memory chips were being used by other companies developing their own processors, but Intel knew how to build processors too, and the Computer Terminal Corporation was looking to develop what was a trend for a hot minute: programmable terminals.
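Back to Moore’s Law for a second: as a rough feel for the doubling Moore described, here’s a back-of-the-envelope sketch that projects a transistor count forward from the 4004’s roughly 2,300 transistors. The fixed two-year doubling period is an assumption chosen for illustration - Moore’s original observation was closer to yearly, and the cadence has drifted over the decades - so treat the later numbers as a trend line rather than a spec sheet.

```python
# Back-of-the-envelope illustration of Moore's Law-style doubling, starting from the
# 4004's roughly 2,300 transistors in 1971. The two-year doubling period is an
# assumption for illustration; the real cadence has shifted over time.
def projected_transistors(start_count: int, start_year: int, year: int, doubling_years: float = 2.0) -> int:
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (year - start_year) / doubling_years
    return int(start_count * 2 ** doublings)


for y in (1971, 1985, 2000, 2023):
    print(y, f"{projected_transistors(2_300, 1971, y):,}")
```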
And given the doubling of speeds, those gave way to microcomputers within just a few years. The Intel 8080 was a 2 MHz chip that became the basis of the Altair 8800, SOL-20, and IMSAI 8080. By then Motorola, Zilog, and MOS Technology were hot on their heels, releasing the Z80, the 6802, and the 6502 processors. And Gary Kildall wrote CP/M, one of the first operating systems, initially for the 8080 prior to porting it to other chips. Sales had been good and Intel had been growing. By 1979 they saw the future was in chips and opened a new office in Haifa, Israel, where they designed the 8088, which clocked in at 4.77 MHz. IBM chose this chip to be used in the original IBM Personal Computer. IBM was going to use an 8-bit chip, but the team at Microsoft talked them into going with the 16-bit 8088, and thus was created the foundation of what would become the Wintel or Intel architecture, or x86, which would dominate the personal computer market for the next 40 years. One reason IBM trusted Intel is that they had proven to be innovators. They had effectively invented the integrated circuit, then the microprocessor, then coined Moore’s Law, and by 1980 had built a 15,000 person company capable of shipping product in large quantities. They were intentional about culture, looking for openness, distributed decision making, and trading off bureaucracy for figuring out cool stuff. That IBM decision to use that Intel chip is one of the most impactful in the entire history of personal computers. Based on Microsoft DOS and then Windows being able to run on the architecture, nearly every laptop and desktop would run on that original 8088/86 architecture. Based on the standards, Intel and Microsoft would both market that their products ran not only on those IBM PCs but also on any PC using the same architecture, and so IBM’s hold on the computing world would slowly wither. On the back of all these chips, revenue shot past $1 billion for the first time in 1983. IBM bought 12 percent of the company in 1982 and thus gave them the Big Blue seal of approval, something important even today. And the hits kept on coming with the 286 through 486 chips during the 1980s. Intel brought the 80286 to market and it was used in the IBM PC AT in 1984. This new chip brought new ways to manage addresses, was the first that could do memory management, and was the first Intel chip where we saw protected mode, so we could get virtual memory and multi-tasking. All of this was made possible with over a hundred thousand transistors. At the time the original Mac used a Motorola 68000, but its sales were sluggish while Intel’s chips flourished at IBM, and slowly we saw the rise of the companies cloning the IBM architecture, like Compaq - still using those Intel chips. Jerry Sanders had actually left Fairchild a little before Noyce and Moore to found AMD and ended up cloning the instructions in the 80286, after entering into a technology exchange agreement with Intel. This led to AMD making the chips at volume and selling them on the open market. AMD would go on to fast-follow Intel for decades. The 80386 would go on to simply be known as the Intel 386, with over 275,000 transistors. It was launched in 1985, but we didn’t see a lot of companies use it until the early 1990s. The 486 came in 1989. Now we were up to a million transistors as well as a math coprocessor. We were 50 times faster than the 4004 that had come out less than 20 years earlier.
I don’t want to take anything away from the phenomenal run of research and development at Intel during this time, but the chips and cores and amazing developments seemed to be on autopilot. The 80s also saw them invest half a billion in reinvigorating their manufacturing plants. With quality manufacturing allowing for a new era of printing chips, the 90s were just as good to Intel. I like to think of this as the Pentium decade, with the first Pentium in 1993. 32-bit, here we come. Revenues jumped 50 percent that year, closing in on $9 billion. Intel had been running an advertising campaign around Intel Inside. This represented a shift in brand from the IBM PC to the Intel chip inside it. The Pentium Pro came in 1995 and we’d crossed 5 million transistors in each chip. And the brand equity was rising fast. More importantly, so was revenue. 1996 saw revenues pass $20 billion. The personal computer was showing up in homes and on desks across the world and most had Intel Inside - in fact we’d gone from Intel Inside to Pentium Inside. 1997 brought us the Pentium II with over 7 million transistors, the Xeon came in 1998 for servers, and 1999 brought the Pentium III. By 2000 they introduced the first gigahertz processor at Intel, and they announced the next generation after Pentium: Itanium, finally moving the world to the 64-bit processor. As processor speeds plateaued they were able to bring multi-core processors and massive parallelism out of the hallowed halls of research and to the desktop computer in 2005. 2006 saw Intel chips go from just Windows machines to the Mac as well. And 45 nanometer logic technology arrived using hafnium-based high-k materials for transistor gates, a shift from the silicon gate technology that dated back to the 60s, which allowed them to pack hundreds of millions of transistors into a single chip. i3, i5, i7, and on. The chips now have over a couple hundred million transistors per core, with 8 cores on a chip potentially putting us over 1.7 or 1.8 billion transistors per chip. Microsoft, IBM, Apple, and so many others went through huge growth and sales jumps, then retreated while dealing with how to run a company of the size they suddenly became. This led each to invest heavily in R&D to effectively end a lost decade - like when IBM built the S/360 or Apple developed the iMac and then the iPod. Intel’s strategy had been research and development. Build amazing products and they sold. Bigger, faster, better. The focus had been on power. But mobile devices were starting to take the market by storm. And the ARM chip was more popular on those because, with a reduced set of instructions, it could use less power and be a bit more versatile. Intel coined Moore’s Law. They know that if they don’t find ways to pack more and more transistors into smaller and smaller spaces then someone else will. And while they haven’t been huge in the RISC-based System on a Chip space, they do continue to release new products and look for the right product-market fit. Just like they did when they went from DRAM and SRAM to producing the types of chips that made them into a powerhouse. And on the back of a steadily rising revenue stream that’s now over $77 billion, they seem poised to be able to weather any storm. Not only on the back of R&D but also some of the best manufacturing in the industry. Chips today are so powerful and small that they contain the whole computer from the era of those Pentiums, just as that 4004 chip contained a whole ENIAC. This gives us a nearly limitless canvas to design software.
Machine learning on a SoC expands the reach of what that software can process. Technology is moving so fast in part because of the amazing work done at places like Intel, AMD, and ARM. Maybe that positronic brain that Asimov promised us isn’t as far off as it seems. But then, I thought that in the 90s as well, so I guess we’ll see.
3/7/2023 · 16 minutes, 51 seconds

AI Hype Cycles And Winters On The Way To ChatGPT

Carlota Perez is a researcher who has studied hype cycles for much of her career. She’s affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries. Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated in the 1956 class from MIT, where he got his Master’s at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), the US military defense industry, and IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as one of the top minds among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles in the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries. Gartner has developed a number of tools to make it easier to take in the types of analysis they create. One is the Magic Quadrant, reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There’s certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth as companies want to be with the most visionary and best when picking a tool. Another of Gartner’s graphical design patterns to display technology advances is what they call the “hype cycle”. The hype cycle simplifies research from career academics like Perez into five phases. 
* The first is the Technology Trigger, which is when a breakthrough is found and PoCs, or proofs-of-concept, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn’t even usable, but shows promise. 
* The second is the Peak of Inflated Expectations, when the press picks up the story and companies are born, capital is invested, and a large number of projects around the new technology fail.
* The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment.
* The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to have real productivity gains. Every company or IT department now runs a pilot and expectations are lower, but now achievable.
* The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders.
The mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there’s enough market, companies now find success. There are issues with the hype cycle. Not all technologies will follow the cycle. The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthetic, and often unsystematic research. There’s also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term cycle denotes a series of events when it should in fact be cyclical, since out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge. ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back to the 1940s, with Alan Turing and with Isaac Asimov’s 1942 story “Runaround,” where the three laws of robotics initially emerged. By 1952 computers could play themselves in checkers and by 1955, Arthur Samuel had written a heuristic learning algorithm he called “temporal-difference learning” to improve its checkers play. Academics around the world worked on similar projects and by 1956 John McCarthy introduced the term “artificial intelligence” when he gathered some of the top minds in the field together for the Dartmouth workshop. They tinkered and a generation of researchers began to join them. By 1964, Joseph Weizenbaum’s “ELIZA” debuted. ELIZA was a computer program that used early forms of natural language processing to run what was called a “DOCTOR” script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment. Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in 1976 in response to the critiques, and some of the early successes were able to then go to wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another similar movement called connectionism, or mostly node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel focused on receptive fields in vision, ideas that would later inspire convolutional neural networks, culminating in a 1968 paper called “Receptive fields and functional architecture of monkey striate cortex.” That built on the original deep learning paper from Frank Rosenblatt of Cornell University called “Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms” in 1962 and work done behind the iron curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism - which, when paired with machine learning, would be called deep learning after Rina Dechter coined the term in 1986 - went through a similar trough of disillusionment that kicked off in 1970.
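To get a feel for the kind of trick ELIZA’s DOCTOR script pulled off, here’s a toy sketch of keyword matching and pronoun reflection. The rules below are invented for illustration and are far simpler than Weizenbaum’s actual script, but the mechanic - match a pattern, echo part of it back as a question - is the same general idea.

```python
# A toy sketch in the spirit of ELIZA's DOCTOR script: match a keyword pattern,
# reflect the pronouns, and fall back to a canned prompt. The rules are invented
# for illustration and are far simpler than Weizenbaum's actual script.
import random
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my", "you": "I"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["Please, go on.", "How does that make you feel?", "I see."]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the reply reads back naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


print(respond("I am feeling anxious about my work"))
```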
Funding for these projects shot up after the early successes and petered out after there wasn’t much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s. These hype cycles weren’t just seen in the United States. The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called “Artificial Intelligence: A General Survey” and analyzed the progress made based on the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any “major impact” in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and funding was drastically cut, based on his findings, for AI research around the UK. Turing, Von Neumann, McCarthy, and others had, intentionally or not, set an expectation that became a check the academic research community just couldn’t cash. For example, the New York Times claimed Rosenblatt’s perceptron would let the US Navy build computers that could “walk, talk, see, write, reproduce itself, and be conscious of its existence” in the 1950s - a goal not achieved even seventy years later. Funding was cut in the US, the UK, and even in the USSR, or Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common in the late 1970s, after engineers like Richard Greenblatt helped to make McCarthy’s ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools like Stanford, researchers began to look for ways to buy commercially built computers ideal for running Lisp. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle had begun in 1983 when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin. Another trend that began in the 1950s but picked up steam in the 1980s was expert systems. These attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle, and after running into barriers with CPUs, by the 1980s those got fast enough. There were inflated expectations after great papers like Richard Karp’s “Reducibility Among Combinatorial Problems” out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like “Fifth Generation Computer Systems” in 1982, a 10-year project to build up massively parallel computing systems. IBM spent around the same amount on their own projects. However, while these types of projects helped to improve computing, they didn’t live up to the expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some of the researchers in AI began to use new terms, after generations of artificial intelligence projects led to subsequent AI winters.
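Stepping back to the perceptron that generated those front-page claims: the model itself is a remarkably small idea - a weighted sum, a threshold, and an error-driven nudge to the weights. Here’s a minimal sketch in the spirit of Rosenblatt’s learning rule; the toy AND-gate data set and the learning rate are invented for illustration, not taken from his Navy demonstrations.

```python
# A minimal perceptron in the spirit of Rosenblatt's model: a weighted sum, a
# threshold, and an error-driven weight update. The toy AND-gate data and the
# learning rate are invented for illustration.
def train_perceptron(samples, epochs=10, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            # Rosenblatt-style rule: nudge weights in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias


and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_gate)
for inputs, target in and_gate:
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", output, "(expected", target, ")")
```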
Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM’s Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL). As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops attended by hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions to English-to-German and English-to-French tasks from 2014. Deep learning models spread and became more accessible - democratic if you will. RNNs, CNNs, DNNs, GANs. Building training data sets was still one of the most human-intensive and slow aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training of data. Now it was possible to get further, faster with AI. This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of the location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included: 
* Reid Hoffman, former PayPal COO, LinkedIn founder, and venture capitalist.
* Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley.
* Jessica Livingston, founding partner at Y Combinator.
* Greg Brockman, former CTO of Stripe, who had studied at Harvard and MIT.
OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregression models. GPT uses deep learning models to process human text and produce text that’s more human than previous models. Not only is it capable of natural language processing, but the generative pre-training of models has allowed it to take in a lot of unlabeled text so people don’t have to hand-label training data, automating much of the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and were ready to build products to take to market by 2019. That’s when they switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company and they released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly.
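Before we get to the ChatGPT moment itself, it helps to picture what “autoregression” means in practice: predict a distribution over the next token given everything so far, sample one, append it, and repeat. The sketch below stands in for that loop with word bigram counts over a tiny made-up corpus - a real GPT uses a transformer over subword tokens and billions of parameters, so treat this as the shape of the loop, not the model.

```python
# Toy illustration of the autoregressive loop GPT-style models run: pick the next
# token from a distribution conditioned on the context, append it, and repeat.
# Here the "model" is just word bigram counts over a tiny invented corpus.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token follows the model".split()

# "Train": count which word follows which.
following: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1


def generate(prompt: str, length: int = 8) -> str:
    tokens = prompt.split()
    for _ in range(length):
        options = following.get(tokens[-1])
        if not options:
            break  # no continuation ever seen for this token
        words, counts = zip(*options.items())
        tokens.append(random.choices(words, weights=counts, k=1)[0])
    return " ".join(tokens)


print(generate("the"))
```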
Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of a world-changing technological breakthrough than most other hype cycles for any industry in recent memory. This, with GPT-4 to be released later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit mainstream media. The AI winter following each cycle seems to depend on the reach of the audience and the depth of expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impacts for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains. The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn’t lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn’t always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts. Those are the smart ones.
2/22/2023 · 23 minutes, 37 seconds

Hackers and Chinese Food: Origins of a Love Affair

Research into the history of computers sometimes leads down some interesting alleys - or wormholes even. My family would always go out to eat Chinese food, or pick it up, on New Year’s day. None of the Chinese restaurants in the area actually closed, so it just made sense. The Christmas leftovers were gone by then and no one really wanted to cook. My dad mentioned there were no Chinese restaurants in our area in the 1970s - so it was a relatively new entrant to the cuisine of my North Georgia town. Whether it’s the Tech Model Railroad Club or hobbyists from Cambridge, stories abound of young engineers debating the merits of this programming technique or chipset or that. So much so that while reading Steven Levy’s Hackers or Tom Lean’s Electronic Dreams, I couldn’t help but hop on DoorDash and order up some yummy fried rice. Then I started to wonder, why this obsession? For one, many of these hackers didn’t have a ton of money. Chinese food was quick and cheap. The restaurants were often family-owned and small. There were higher end restaurants, but concepts like P.F. Chang’s hadn’t sprung up yet. That wouldn’t come until 1993. Another reason it was cheap is that many of the proprietors of the restaurants were recent immigrants. Some were from Hunan, others from Taipei or Sichuan, Shanghai, or Peking (the Romanized name for Beijing). Chinese immigrants began to flow into the United States during the Gold Rush of California in the late 1840s and early 1850s. The Qing Empire had been at its height at the end of the 1700s, when China ruled over a third of the humans in the world. Not only that - it was one of the top economies in the world. But rapid growth in population meant less farmland for everyone - fewer jobs to go around. Poverty spread, just as colonial powers began to pick away at parts of the empire. Britain had banned the slave trade in 1807 and Chinese laborers had been used to replace the slaves. The use of opium spread throughout the colonies and, with the laborers, back into China. The Chinese tried to ban the opium trade and seized opium in Canton. The British had better ships and better guns, and when the First Opium War broke out, China was forced to give up Hong Kong to the British in 1842, which began what some historians refer to as a century of humiliation, as China gave up land until it was able to modernize. Hong Kong became a British colony under Queen Victoria and the Victorian obsession with China grew. Art, silks (as with the Romans), vases, and anything the British could get their hands on flowed through Hong Kong. Then came the Taiping Rebellion, which lasted from 1851 to 1864. A self-proclaimed Christian theocrat led the uprising and China was forced to wage a war internally in which around 20 million people died and scores more were displaced. The scent of an empire in decay was in the air. Set against a backdrop of more rebellions, the Chinese army was weakened to the point that it lost the First Sino-Japanese War in 1894, inviting more intervention from colonial powers. By 1900, the anti-colonial and anti-Christian Boxer Uprising saw missionaries slaughtered and foreigners expelled. Great powers of the day sent ships and troops to retrieve their peoples and soon declared war on the empire and seized Beijing. This was all expensive, and led to reparations, a prohibition on importing arms, the razing of forts, and more foreign powers occupying areas of China.
The United States put over $10 million of its take from what was called the Boxer Indemnity toward supporting Chinese students who came to the United States. The Qing court had lost control, and by 1911 the Wuchang Uprising began; by 1912, 2,000 years of Chinese dynasties were over with the founding of the Republic of China, and internal conflicts for power continued until Mao Zedong and his followers finally seized power, established the People’s Republic of China as a communist nation, and cleansed the country of detractors during what they called the Great Leap Forward, resulting in 45 million dead. China itself was diplomatically disconnected from the United States at the time, which had backed the government now in exile in Taipei, the capital city of Taiwan - the Republic of China, as it had been known during the civil war. The food, though. Chinese food began to come into the United States during the Gold Rush. Cantonese merchants flowed into the sparkling bay of San Francisco, and emigrants could find jobs mining, laying railroad tracks, and in agriculture. Hard work means you get real hungry, and they cooked food like they had at home. China had a better restaurant and open market cooking industry than the US at the time (and arguably still does). Some of the Chinese who settled in San Francisco started restaurants - many better than those run by Americans. The first known restaurant owned by a Chinese proprietor was Canton Restaurant in 1849. As San Francisco grew, so grew the Chinese food industry. Every group of immigrants faces xenophobia or racism. The use of Chinese laborers had led to laws in England that attempted to limit their use. In some cases they were subjugated into labor. The Chinese immigrants came into the California Gold Rush and many stayed. More restaurants were opened and some catered to white people more than the Chinese. The Transcontinental Railroad was completed in 1869 and tourists began to visit San Francisco from the east. Chinatowns began to spring up in other major cities across the United States - restaurants, laundries, and even Eastern pharmacies. New people bring new ways, and economies go up and down. Prejudice reared its ugly head. There was an economic recession in the 1870s. There were fears that the Chinese were taking jobs, driving wages down, and bringing crime. Anti-Chinese sentiment became law in the Chinese Exclusion Act of 1882, which halted Chinese immigration into the US. That would not be repealed until 1943. Conservative approaches to immigration did nothing to limit the growing appeal of Chinese food in the United States. Merchants, like those who owned Chinese restaurants, could get special visas. They could bring relatives and workers. Early Chinese restaurants had been called “chow chow houses” and by the early 1900s there were new chop suey restaurants in big cities that were affordable. Chop suey basically means “odds and ends” and most of the dishes were heavily westernized but still interesting and delicious. The food was fried in ways it hadn’t been in China, and sweeter. Ideas from other Asian nations also began to come in, like fortune cookies, initially from Japan. Americans began to return home from World War II in the late 1940s. Many had experienced new culinary traditions in lands they visited. The cuisine had initially been Cantonese-inspired, but more people flowed in from other parts of China, and from Taiwan, and they brought food inspired by their native regions. Areas like New York and San Francisco got higher end restaurants.
Once the Chinese Exclusion Act was repealed, plenty of immigrants fled wars and cleansings in China. Meanwhile, Americans embraced access to different types of foods - like Italian, Chinese, and fast food. Food became a part of the national identity. Further, new ways to preserve food became possible as people got freezers and canneries helped spread foods - like pasta sauce. This was the era of the spread of Spam and other types of early processed foods. The military helped spread the practice - as did Jeno Paulucci, who founded the Chun King Corporation in 1947. The Great Depression had proven there needed to be new ways to distribute foods, and some capitalized on that. 4,000+ Chinese restaurants in the US in the 1940s meant there were plenty of restaurants to buy those goods rather than make them fresh. Chop suey was possibly created by those early Chinese migrants, and a new influx of immigrants brought new opportunities to diversify the American palate. The 1960s saw an increase in legislation to protect human rights. Amidst the civil rights movement, the Hart-Celler Act of 1965 stopped the long-standing practice of controlling immigration effectively by color. The post-war years saw shifting borders and wars throughout the world - especially in Eastern Europe and Asia. The Marshall Plan and similar aid helped rebuild the parts of Asia that weren’t communist and opened the ability for more diverse people to move to the US. Many that we’ve covered went into computing and helped develop a number of aspects of computing. They didn’t just come from China - they came from Russia, Poland, India, Japan, Korea, Vietnam, Thailand, and throughout the world. Their food came with them. This is the world the hackers that Steven Levy described lived in. The first Chinese restaurant in London opened in 1907, and more followed as people from Hong Kong moved to the UK, especially after World War II. The number of Chinese restaurants in the US grew to tens of thousands in the decades after Richard Nixon visited Beijing in 1972 to open relations back up with China. But the impact at the time was substantial, even on technologists. It wasn’t just those hackers from MIT who loved their Chinese food, but those in Cambridge as well in the 1980s, who partook in a more Americanized Chinese cuisine, like “chow mein” - which loosely translates to “fried noodles” and emerged in the US in the early 1900s. Not all dishes have such simple origins to track down. Egg rolls emerged in the 1930s, a twist on the more traditional Chinese spring roll. Ding Baozhen, a governor of the Sichuan province in the Qing Dynasty, discovered a spicy marinated chicken dish in the mid-1800s that spread quickly. He was the Palace Guardian, or Kung Pao, as the dish is still known. Zuo Zongtang, better known as General Tso, was a Qing Dynasty statesman and military commander who helped put down the Taiping Rebellion in the latter half of the 1800s. Chef Peng Chang-kuei escaped communist China to Taiwan, where he developed General Tso’s chicken and named it after the war hero. It came to New York in the 1970s. Sweet and sour pork also got its start in the Qing era, in 18th century Cantonese cuisine, and spread to the US with the Gold Rush. Some dishes are far older. Steamed dumplings were popular from Afghanistan to Japan and go back to the Han Dynasty - possibly invented by the Chinese doctor Zhang Zhongjing around the turn of the millennium. Peking duck goes back further than most of the Americanized dishes, getting its start in the 1300s in the Yuan and Ming Dynasties - though closer to Shanghai than to Beijing.
Otto Reichardt brought the ducks to San Francisco to be served in restaurants in 1901. Chinese diplomats helped popularize the dish in the 1940s as some of their staffs stayed in the US, and the dish exploded in popularity in the 1970s - especially after Nixon’s trip to China, which included a televised meal on Tiananmen Square where he and Henry Kissinger ate the dish. There are countless stories of Chinese-born immigrants bringing their food to the world. Some are emblematic of larger population shifts globally. Cecilia Chiang grew up in Shanghai until Japan invaded, when she and her sister fled to Chengdu, only to later flee the Chinese Communists and emigrate to the US in 1959. She opened The Mandarin in 1960 in San Francisco and a second location in 1967. It was an upscale restaurant and introduced a number of new dishes to the US from China. She went on to serve everyone from John Lennon to Julia Child - and her son Philip replaced her in 1989 before starting a more mainstream chain of restaurants he called P.F. Chang’s in 1993. The American dream, as it had come to be known. Plenty of other immigrants from countries around the world were met with open arms: chemists, biologists, inventors, spies, mathematicians, doctors, physicists, and yes, computer scientists. And of course, chefs. Diversity of thought, diversity of ideas, and diversity-driven innovation can only come from diverse peoples. The hackers innovated over their Americanized versions of Chinese food - many making use of technology developed by immigrants from China, their children, or those who came from other nations. Just as those from nearly every industry did.
12/30/2022 · 19 minutes, 37 seconds

The Silk Roads: Then And Now...

The Silk Road, or roads more appropriately, has been in use for thousands of years. Horses, jade, gold, and of course silk flowed across the trade routes. As did spices - and knowledge. The term Silk Road was coined by a German geographer named Ferdinand von Richthofen in 1870 to describe a network of routes that was somewhat formalized in the second century, but that some theorize dates back 3,000 years, given that silk has been found on Egyptian mummies from that time - or further. The use of silk itself in China in fact dates back perhaps 8,500 years. Chinese silk has been found in Scythian graves, ancient Germanic graves, and along mountain ranges and waterways around modern India, where gold and silk flowed between east and west. These routes gave way to empires along the Carpathian Mountains and the Kansu Corridor. There were Assyrian outposts in modern Iran, and the Sogdians built cities around modern Samarkand in Uzbekistan, an area that has been inhabited since the 4th millennium BCE. The Sogdians developed trading networks that spanned over 1,500 miles - into ancient China. The road expanded with the Persian Royal Road from the 5th century BCE across Turkey, and with the conquests of Alexander the Great in the 300s BCE, the Macedonian Empire pushed into Central Asia into modern Uzbekistan. The satrap Diodotus I claimed independence for one of those areas between the Hindu Kush, Pamirs, and Tengri Tagh mountains, which became known by the Hellenized name Bactria and is called the Greco-Bactrian and then Indo-Greek Kingdoms by historians. Their culture also dates back thousands of years further. The Bactrians became powerful enough to push into the Indus Valley, west along the Caspian Sea, and north to the Syr Darya river, known as the Jaxartes at the time, and to the Aral Sea. They also pushed south into modern Pakistan and Afghanistan, and east to modern Kyrgyzstan. To cross the Silk Road was to cross through Bactria, and they were considered a Greek empire in the east. The Han Chinese called them Daxia in the third century BCE. They grew so wealthy from the trade that they became the target of conquest by neighboring peoples once the thirst for silk in the Roman Empire could not be quenched. The Romans consumed so much silk that silver reserves were worn thin and they regulated how silk could be used - something some of the Muslim empires would do over the next generations. Meanwhile, the Chinese hadn’t known where their silk was destined, but had been astute enough to limit who knew how silk was produced. The Chinese general Pan Chao attempted to make contact with the Romans in the first century AD, only to be thwarted by the Parthians, who acted as the middlemen on many a trade route. It wasn’t until the Romans pushed east enough to control the Persian Gulf that an envoy sent by Marcus Aurelius made direct contact with China in 166 AD, and from there contact spread throughout the kingdom. Justinian even sent monks to bring home silkworm eggs but they were never able to reproduce silk, in part because they didn’t have mulberry trees. Yet the west had perpetrated industrial espionage on the east, a practice that would be repeated in 1712 when a Jesuit priest found out how the Chinese created porcelain. The Silk Road was a place where great fortunes could be found or lost. The Dread Pirate Roberts was a character from a movie called The Princess Bride, who had left home to make his fortune, so he could spend his life with his love, Buttercup.
The Silk Road had made many a fortune, so Ross Ulbricht used that name on a site he created called the Silk Road, along with the handles Frosty and Altoid. He’d gotten his Bachelor’s at the University of Texas and his Master’s at Penn State University before he got the idea to start a website he called the Silk Road in 2011. Most people connected to the site via Tor and paid for items in bitcoin. After he graduated from Penn State, he’d started a couple of companies that didn’t do that well. Given the success of Amazon, he and a friend started a site to sell used books, but Ulbricht realized it was more profitable to be the middle man, as the Parthians had been thousands of years earlier. The new site would be Underground Brokers, later changed to The Silk Road. Cryptocurrencies allowed for anonymous transactions. He got some help from others, including two that went by the pseudonyms Smedley (later suspected to be Mike Wattier) and Variety Jones (later suspected to be Thomas Clark). They started to facilitate transactions in 2011. Business was good almost from the beginning. Then Gawker published an article about the site and more and more attention was paid to what was sold through this new darknet portal. The United States Department of Justice and other law enforcement agencies got involved. When bitcoins traded at less than $80 each, the United States Drug Enforcement Administration (DEA) seized 11 bitcoins, but couldn’t take the site down for good. It was actually an IRS investigator named Gary Alford who broke the case when he found the link between the Dread Pirate Roberts and Altoid, and then a post that included Ulbricht’s name and phone number. Ulbricht was picked up in San Francisco and 26,000 bitcoins were seized, along with another 144,000 from Ulbricht’s personal wallets. Two federal agents were arrested when it was found they traded information about the investigation to Ulbricht. Ulbricht was also accused of murder for hire, but those charges never led to much. Ulbricht now serves a life sentence. The Silk Road of the darknet didn’t sell silk. 70% of the 10,000 things sold were drugs. There were also fake identities, child pornography, and, through a second site, firearms. There were scammers. Tens of millions of dollars flowed over this new Silk Road. But the secrets weren’t guarded well enough and a Silk Road 2 was created in 2013, which only lasted a year. Others come and go. It’s kinda like playing whack-a-mole. The world is a big place and the reach of law enforcement agencies limited, thus the harsh sentence for Ulbricht.
10/28/2022 · 10 minutes, 7 seconds

Simulmatics: Simulating Advertising, Data, Democracy, and War in the 1960s

Dassler Shoes was started by Adolf Dassler in 1924 in Germany, after he came home from World War I. His brother Rudolf joined him. They made athletic shoes and developed spikes to go on the bottom of the shoes. By 1936, they convinced Jesse Owens to wear their shoes on the way to his gold medals. Some of the American troops who liked the shoes during World War II helped spread the word. The brothers had a falling out soon after the war was over. Adolf founded Adidas while Rudolf created a rival shoe company called Puma. This was just in time for the advertising industry to convince people that if they bought athletic shoes they would instantly be, er, athletic. The two companies became a part of an ad-driven identity that persists to this day - one that most who buy the advertised products hardly understand themselves. A national identity involves concentric circles of understanding. The larger a nation, the more concentric circles and the harder it is to nail down exactly who has what identity. Part of this is that people spend less time thinking about who they are and more time being told who they should want to be like. Woven into the message of who a person should be is a bunch of products that a person has to buy to become the ideal. That’s called advertising. James White founded the first modern advertising agency, called ‘R. F. White & Son’, in Warwick Square, London in 1800. The industry evolved over the next hundred or so years as more plentiful supplies led to competition and so more of a need to advertise goods. Increasingly popular newspapers, made possible by better printing presses, turned out to be a great place to advertise. The growth of industrialism meant there were plenty of goods and so competition between those who manufactured or trafficked those goods. The more efficient the machines of industry became, the more the advertising industry helped sell what the world might not yet know it needed. Many of those agencies settled onto Madison Avenue in New York as balances of global power shifted, and so by the end of World War II Madison Avenue had become a synonym for advertising. Many now-iconic brands were born in this era. Manufacturers and distributors weren’t the only ones to use advertising. People put out ads to find love in the personals, and by the 1950s advertising even began to find its way into politics. Iconic politicians could be created. Dwight D. Eisenhower served as the United States president from 1953 to 1961. He oversaw the liberation of Northern Africa in World War II, before he took command to plan the invasion of Normandy on D-Day. He was almost universally held as a war hero in the United States. He had not held public office, but the ad men of Madison Avenue were able to craft messages that put him into the White House - messages like “I Like Ike.” These were the early days of television and the early days of computers. A UNIVAC was able to predict that Eisenhower would defeat Adlai Stevenson in a landslide election in 1952. The country was not “Madly for Adlai,” as his slogan went. ENIAC had first been used in 1945. MIT’s Whirlwind was created in 1951, and the age of interactive computing was upon us. Not only could a computer predict who might win an election, but new options in data processing allowed for more granular ways to analyze data. A young Senator named John F.
Kennedy was heralded as a “new candidate for the 1960s.” Just a few years earlier, Stevenson had lambasted Ike for using advertising, but this new generation was willing to let computers help build a platform - just as the advertisers were starting to use computers to help them figure out the best way to market a product. It turns out that words mattered. At the beginning of that 1960 election, many observed they couldn’t tell much difference between the two candidates: Richard Nixon and John Kennedy. Kennedy’s Democrats were still largely fractured between those who believed in philosophies dating back to the New Deal and segregationists. Ike presided over the early days of the post-World War II new world order. This new generation, like new generations before and since, was different. They seemed to embrace the new digital era. Someone like JFK wasn’t punching cards and feeding them into a computer, writing algorithms, or out surveying people to collect data. That was done by a company founded in 1959 called Simulmatics. Jill Lepore called them the What If men in her book If/Then - a fascinating read that goes further into the politics of the day. The founder of the company was a Madison Avenue ad man named Ed Greenfield. He surrounded himself with a cast of characters that included people from Johns Hopkins University, MIT, Yale, and IBM. Ithiel de Sola Pool had studied Nazi and Soviet propaganda during World War II. He picked up on work from the Hungarian Frigyes Karinthy and, with students, ran Monte Carlo simulations on people’s acquaintances to formulate what would later become the Small World Problem, or the Six Degrees of Separation - a later inspiration for the social network of the same name and, even later, for Facebook. The social sciences had become digital. Political science could then be used to get at the very issues that could separate Kennedy from Nixon. The People Machine, as one called it, was a computer simulation - thus the name of the company. It would analyze voting behaviors. The previous Democratic candidate, Stevenson, had given long-winded, complex speeches. They analyzed the electorate and found that “I Like Ike” resonated with more people. It had, after all, been developed by the same ad man who came up with “Melts in your mouth, not in your hands” for M&Ms. They called the project Project Microscope. They recruited some of the best liberal minds in political science and computer science. They split the electorate into 480 groups. A big focus was how to win the African-American vote. Turns out Gallup polls didn’t study that vote because Southern newspapers had blocked doing so. Civil rights, and race relations in general, weren’t unlike a few other issues. There was anti-Catholic, anti-Jewish, and anti-a-lot-else sentiment. The Republicans were the party of Lincoln and had gotten a lot of votes over the last hundred years for that. But factions within the party had shifted. Loyalties were shifting. Kennedy was a Catholic, and many had cautioned he should downplay that issue. The computer predicted civil rights and anti-Catholic bigotry would help him, which became Kennedy’s platform. He stood for what was right, but were they his positions or just what the nerds thought? He gained votes at the last minute. Turns out the other disenfranchised groups saw the bigotry against one group as akin to bigotry against their own - just like the computers thought they would.
Kennedy became an anti-segregationist, as that would help win the Black vote in some large population centers. It was the most aggressive, or liberal, civil-rights plank the Democrats had ever taken up. Civil rights are human rights. Catholic rights are as well. Kennedy offered the role of Vice President to Lyndon B. Johnson, the Senate Majority Leader, and was nominated as the Democratic candidate. Project Microscope from Simulmatics was hired in part to shore up Jewish and African-American votes. They said Kennedy should turn the fact that he was a Catholic into a strength: give up a few votes here and there in the South but pick up other votes. He also took the Simulmatics information as it came out of the IBM 704 mainframe to shore up his stance on other issues. That confidence helped him out-perform Nixon in televised debates. They used teletypes and even had the kids' rooms converted into temporary data rooms. CBS predicted Nixon would win. Less than an hour later they predicted Kennedy would win. Kennedy won the popular vote by 0.1 percent of the country even after two recounts. The Black vote had turned out big for Kennedy. News leaked about the work Simulmatics had done for Kennedy. Some knew that IBM had helped Hitler track Jews, as has been written about in the book IBM and the Holocaust by Edwin Black. Others still had issues with advertising in campaigns and couldn't fathom computers. Despite Stalin's disgust for computers, some compared the use of computers to Stalinist propaganda. Yet it worked - even if in retrospect the findings were all things we could take for granted now. They weren't yet. The Kennedy campaign at first denied the "use of an electronic brain," and yet their reports live on in the Kennedy Library. A movement against the use of the computer seemed to die after Kennedy was assassinated. Works of fiction persisted, like The 480 from Eugene Burdick, which got its title from the number of groups Simulmatics used. The company went on to experiment with every potential market their computer simulation could be used in. The most obvious was the advertising industry. But many of those companies went on to buy their own computers. They already had what many now know is the most important aspect of any data analytics project: the data. Sometimes they had decades of buying data - and could start over on more modern computers. They worked with the Times to analyze election results in 1962, to try and catch newspapers up with television. The project was a failure and newspapers leaned into more commentary and longer-term analysis to remain a relevant supplier of news in a world of real-time television. They applied their brand of statistics to help simulate the economy of Venezuela in a project called Project Camelot, which LBJ later shot down. Their most profitable venture became working with the Defense Department to do research in Vietnam. They collected data, analyzed data, punched data into cards, and fed it into computers. Pool was unabashedly pro-US and it's arguable that they saw what they wanted to see. So did the war planners in the Pentagon, who followed Robert McNamara. McNamara had been one of the Whiz Kids who turned around the Ford Motor Company with a new brand of data-driven management to analyze trends in the car industry, shore up supply chains, and out-innovate the competition. He became the first president of the company who wasn't a Ford. His family had moved to the US from Ireland to flee the Great Irish Famine. 
Not many generations later, he got an MBA from Harvard before he became a captain in the United States Army Air Forces during World War II, primarily as an analyst. Henry Ford II hired his whole group to help with the company. As many in politics and the military learn, companies and nations are very different. They did well at first, reducing the emphasis on big nuclear first strike capabilities and developing other military capabilities. One of those was how to deal with guerrilla warfare and counterinsurgencies. That became critical in Vietnam, a war between the communist North Vietnamese and the South Vietnamese. The North was backed by North Korea, China, and the Soviet Union; the South by the United States, South Korea, and Australia. Others got involved, but those were the main parties. McNamara used computers to provide just-in-time provisioning of armed forces and to move spending to where it could be most impactful, which slashed over $10 billion in military spending. As the Vietnam War intensified, statistically the number of troops killed by Americans versus American casualties made it look computationally like the war was being won. In hindsight we know it was not. Under McNamara, ARPA hired Simulmatics to study the situation on the ground. They would merge computers, information warfare, psychological warfare, and social sciences. The Vietnamese that they interviewed didn't always tell them the truth. After all, maybe they were CIA agents. Many of the studies lacked true scholars, as the war was unpopular back home. People who collected data weren't always skilled at the job. They spoke primarily with those they could go see without getting shot at as much. In general, the algorithms might have worked or might not have worked - but they had bad data. Yet Simulmatics sent reports to McNamara that the operations were going well. Many in the military would remember this as real capabilities at cyber warfare and information warfare were developed in the following decades. Back home, Simulmatics also became increasingly tied up in things Kennedy might have arguably fought against. There were riots and civil rights protests, and Simulmatics took contracts to simulate racial riots. Some felt they could riot or go die in the jungles of Vietnam. The era of predictive policing had begun as the hope of the early 1960s turned into the apathy of the late 1960s. Martin Luther King Jr. spoke out against riot prediction, yet Simulmatics pushed on. Whether their insights were effective in many of those situations - just like in Vietnam - was dubious. They helped usher in the era of surveillance capitalism, in a way. But the arrival of computers in ad agencies meant that if they hadn't, someone else would have. People didn't take kindly to being poked, prodded, and analyzed intellectually. Automation took jobs, which Kennedy had addressed in rhetoric if not in action. The war was deeply unpopular as American soldiers came home from a far-off land in caskets. The link between Simulmatics and academia was known. Students protested against them and claimed they were war criminals. The psychological warfare abroad, being on the wrong side of history at home with the race riots, and the disintegrating military-industrial-university complex didn't help. There were technical issues. The technology had changed away from languages like FORTRAN. 
Further, the number of data points required, and how they were processed, called for what we now call "Big Data" and "machine learning." Those technologies showed promise early, but more mathematics needed to be developed to fully weaponize the surveillance of everything. More code and libraries needed to be developed to crunch the large amounts of statistics. More work needed to be done to get better data and process it. The computerization of the social sciences was just beginning, and while people like Pool predicted the societal impacts we could expect, people at ARPA doubted the results, and the company they created could not be saved as all these factors converged to put it into bankruptcy in 1970. Their ideas and research lived on. Pool and others published some of their findings. Books opened minds to the good and bad of what technology could do. The Southern politicians, or Dixiecrats, fell apart. Nixon embraced a new brand of conservatism after he lost the race to be the Governor of California to Pat Brown in 1962. There were charges of voter fraud from the 1960 election. The Mansfield Amendment restricted military funding of basic research in 1969 and went into effect in 1970. Ike had warned of the growing links between universities and the creators of weapons of war - exactly what Simulmatics signified - and the amendment helped pull back funding for such exploits. As Lepore points out in her book, mid-century liberalism was dead. Nixon tapped into the silent majority who countered the counterculture of the 1960s. Crime rose and the conservatives became the party of law and order. He opened up relations with China, spun down the Vietnam War, negotiated with the Soviet leader Brezhnev to warm relations, and rolled back Johnson's attempts at what had been called The Great Society to get inflation back in check. Under him the incarceration rate in the United States exploded. His presidency ended with Watergate, and under Ford, Carter, Reagan, and Bush, the personal computer became prolific and the internet, once an ARPA project, began to take shape. They all used computers to find and weigh issues, thaw the Cold War, and build a new digitally-driven world order. The Clinton years saw an acceleration of the Internet, and by the early 2000s companies like PayPal were on the rise. One of their founders was Peter Thiel, who founded Palantir in 2003 and then invested in companies like Facebook with his PayPal money. Palantir received backing from In-Q-Tel ("World-class, cutting-edge technologies for National Security"). In-Q-Tel was founded in 1999 as the global technological evolution began to explode. While the governments of the world had helped build the internet, it wasn't long before they realized it gave an asymmetrical advantage to newcomers. The more widely available the internet, the more far-reaching attacks could go, the more subversive economic warfare could be. Governmental agencies like the United States Central Intelligence Agency (CIA) needed more data and the long-promised artificial intelligence technologies to comb through that data. Agencies then got together and launched their own venture capital fund, similar to those in the private sector - one called In-Q-Tel. Palantir has worked to develop software for US Immigration and Customs Enforcement, or ICE, to investigate criminal activities and allegedly used data obtained from Cambridge Analytica along with Facebook data. 
The initial aim of the company was to take technology developed for PayPal's fraud detection and apply it to other areas like terrorism, with help from intelligence agencies. They help fight fraud for nations and have worked with the CIA, NSA, FBI, CDC, and various branches of the United States military on software projects. Their Gotham project is the culmination of decades of predictive policing work. There are dozens of other companies like Palantir. Just as with Pool's work on the Six Degrees of Separation, social networks made the amount of data that could be harvested all the greater. Companies use that data to sell products. Nations use that data for propaganda. Those who get elected to run nations use that data to find out what they need to say to be allowed to do so. The data is more accurate with every passing year. Few of the ideas are all that new, just better executed. The original sin mostly forgotten, we still have to struggle with the impact and ethical ramifications. Politics has always involved a bit of a ruse in the rise to power. Now it's less about personal observation and more about the observations and analyses that can be gleaned from large troves of data. The issues brought up in books like The 480 are as poignant today as they were in the 1960s.
10/14/2022 - 27 minutes, 43 seconds

Taiwan, TSMC, NVIDIA, and Foundries

Taiwan is a country about half the size of Maine with about 17 times the population of that state. Taiwan sits just over a hundred miles off the coast of mainland China. It's home to some 23 and a half million humans - roughly halfway between the populations of Texas and Florida, or a few more than live in Romania, for the Europeans. Taiwan was connected to mainland China by a land bridge in the Late Pleistocene, and human remains have been found dating back 20,000 to 30,000 years. About half a million people on the island nation are aboriginal, or their ancestors are from there. But the population became more and more Chinese in recent centuries. Taiwan had not been part of China during the earlier dynastic ages but had been used by dynasties in exile to attack one another, and so became a part of the Chinese empire in the 1600s. Taiwan was won by Japan in the late 1800s and held by the Japanese until World War II. During that time, a civil war had raged on the mainland of China, with the Republic of China eventually formed as the replacement government for the Qing dynasty following a bloody period of turf battles by warlords and then civil war. Taiwan was under martial law from the time the pre-communist government of China retreated there during the exit of the Nationalists from mainland China in the 1940s to the late 1980s. During that time, just like the exiled Han dynasty, they orchestrated war from afar. They stopped fighting, much like the Koreans, but have still never signed a peace treaty. And so large parts of the world remained in stalemate. As the years became decades, Taiwan, or the Republic of China as they still call themselves, has always had an unsteady relationship with the People's Republic of China, or China as most in the US call them. The Western world recognized the Republic of China, and the Soviet bloc countries recognized the mainland government. US President Richard Nixon visited mainland China in 1972 to re-open relations with the communist government there, and relations slowly improved. The early 1970s was a time when much of the world still recognized the ruling government of Taiwan as the official Chinese government, and there were proxy wars the two continued to fight. The Taiwanese and Chinese still aren't besties. There are deep scars and propaganda that keep relations from being repaired. During World War II, the Japanese also invaded Hong Kong. During the occupation there, Morris Chang's family became displaced and moved to a few cities during his teens before he moved to Boston to go to Harvard and then MIT, where he did everything to get his PhD except defend his thesis. He then went to work for Sylvania Semiconductor and then Texas Instruments, finally getting his PhD from Stanford in 1964. He became a Vice President at TI and helped build an early semiconductor designer and foundry relationship when IBM designed a chip and TI manufactured it. The Premier of Taiwan at the time, Sun Yun-suan, played a central role in Taiwan's transformation from an agrarian economy to a large exporter. His biggest win was recruiting Chang to move to Taiwan and found TSMC, or the Taiwan Semiconductor Manufacturing Company. Some of this might sound familiar as it mirrors stories from companies like Samsung in South Korea. 
In short: Japanese imperialism, democracies versus communists, then rapid economic development into a massive manufacturing powerhouse, in large part due to the fact that semiconductor designers were split from semiconductor foundries, where chips are actually created. In this case, a former Chinese national was recruited to return as founder and led TSMC for 31 years before he retired in 2018. Chang could see from his time with TI that more and more companies would design chips for their needs and outsource manufacturing. They worked with Texas Instruments, Intel, AMD, NXP, Marvell, MediaTek, and ARM, and then came the big success when they started to make the Apple chips. The company started down that path in 2011 with the A5 and A6 SoCs for iPhone and iPad on trial runs but picked up steam with the A8 and A9 through A14 and the Intel replacement for the Mac, the M1. They now sit on a half-trillion US dollar market cap and are the largest company in Taiwan. For perspective, their market cap only trails the GDP of the whole country by a few billion dollars.

Nvidia
TSMC is also a foundry Nvidia uses. As of the time of this writing, Nvidia is the 8th largest semiconductor company in the world. We've already covered Broadcom, Qualcomm, Micron, Samsung, and Intel. Nvidia is a fabless semiconductor company and so designs chips that vendors like TSMC manufacture. Nvidia was founded by Jensen Huang, Chris Malachowsky, and Curtis Priem in 1993 in Santa Clara, California (although it is now incorporated in Delaware). Not all who leave the country they were born in due to war or during times of war return. Huang was born in Taiwan and his family moved to the US right around the time Nixon re-established relations with mainland China. Huang then went to grad school at Stanford before he became a CPU designer at AMD and a director at LSI Logic, so he had experience as a do-er, a manager, and a manager's manager. He was joined by Chris Malachowsky and Curtis Priem, who had designed the IBM Professional Graphics Adapter and then the GX graphics chip at Sun. They founded the company because they saw the Mac, Windows, and Amiga OS graphical interfaces, they saw the games one could play on those machines, and they thought graphics cards would be the next wave of computing. And so for a long time, Nvidia managed to avoid competition with other chip makers with a focus on graphics. That initially meant gaming and higher-end video production but has expanded into much more, like parallel programming and even cryptocurrency mining. They were more concerned about the next version of the idea or chip or company and used NV in the naming convention for their files. When it came time to name the company, they looked up words that started with those letters - which of course don't exist - so instead chose invidia, or Nvidia for short, as it's Latin for envy: what everyone who saw those sweet graphics the cards rendered would feel. They raised $20 million in funding and got to work. First they partnered with SGS-Thomson Microelectronics in 1994 to manufacture what they were calling a graphical-user interface accelerator that they packaged on a single chip. They worked with Diamond Multimedia Systems to install the chips onto the boards. In 1995 they released the NV1. The PCI card was sold as the Diamond Edge 3D and came with a 2D/3D graphics core with quadratic texture mapping. Screaming fast, and Virtua Fighter from Sega was ported to the platform. DirectX had come in 1995. 
So Nvidia released DirectX drivers that supported Direct3D, the API that Microsoft developed to render 3D graphics. This was a time when 3D was on the rise for consoles and desktops. Nvidia timed it perfectly and reaped the rewards when they hit a million sold in the first four months for the RIVA, a 128-bit 3D processor that got used as an OEM part in 1997. Then came the RIVA ZX in 1998 and the RIVA TNT for multi-texture 3D processing. They also needed more manufacturing support at this point and entered into a strategic partnership with TSMC to manufacture their boards. A lot of vendors had a good amount of success in their niches. By the late 1990s there were companies who made memory, or the survivors of the DRAM industry after ongoing price dumping issues. There were companies that made central processors, like Intel. Nvidia led the charge for a new type of chip, the GPU. They invented the GPU in 1999 when they released the GeForce 256. This was the first single-chip GPU. That means integrated lighting, triangle setup, and rendering - like the old math coprocessor, but for video. Millions of polygons could be drawn on screens every second. They also released the Quadro Pro GPU for professional graphics and went public in 1999 at an IPO price of $12 per share. Nvidia used some of the funds from the IPO to scale operations, organically and inorganically. In 2000 they released the GeForce2 Go for laptops and acquired 3dfx, closing deals to get their 3D chips in devices from OEM manufacturers who made PCs and in the new Microsoft Xbox. By 2001 they hit $1 billion in revenues and released the GeForce 3 with a programmable GPU, using APIs to make their GPU a platform. They also released the nForce integrated graphics and so by 2002 hit 100 million processors out on the market. They acquired MediaQ in 2003 and partnered with game developer Blizzard on Warcraft. They continued their success in the console market when the GeForce platform was used in the PS3 in 2005, and by 2006 had sold half a billion processors. They also added the CUDA architecture that year to put a general-purpose GPU on the market and acquired Hybrid Graphics, who developed 2D and 3D embedded software for mobile devices. In 2008 they went beyond the consoles and PCs when Tesla used their GPUs in cars. They also acquired PortalPlayer, who supplied semiconductors and software for personal media players, and launched the Tegra mobile processor to get into the exploding mobile market. More acquisitions came in 2008, but the huge win was when the GeForce 9400M was put into Apple MacBooks. Then more small chips in 2009, when the Tegra processors were used in Android devices. They also continued to expand how GPUs were used. They showed up in ultrasound machines and, in 2010, in the Audi. By then they had chips in the Tianhe-1A supercomputer and had released Optimus. All these types of devices that could use a GPU meant they hit a billion processors sold in 2011, which is when they went dual-core with the Tegra 2 mobile processor and entered into cross-licensing deals with Intel. At this point TSMC was able to pack more and more transistors into smaller and smaller places. This was a big year for larger jobs on the platform. By 2012, Nvidia had the Kepler-based GPUs out and their chips were used in the Titan supercomputer. They also released GRID, a virtualized GPU offering for cloud processing. It wasn't all about large-scale computing efforts. The Tegra 3 and GTX 600 came out in 2012 as well. 
Then in 2013 came the Tegra 4, a quad-core mobile processor; a 4G LTE mobile processor; the Nvidia Shield for portable gaming; the GTX Titan; and a GRID appliance. In 2014 came the 192-core Tegra K1, the Shield Tablet, and Maxwell. In 2015 came the Tegra X1, with 256 cores and deep learning support, the Titan X and Jetson TX1 for smart machines, and Nvidia Drive for autonomous vehicles. They continued that deep learning work with an appliance in 2016, the DGX-1. The Drive got an update in the form of the PX 2 for in-vehicle AI. By then, they were a 20-year-old company working on the 11th generation of the GPU, and most CPU architectures had dedicated cores for machine learning options of various types. 2017 brought Volta and the Jetson TX2, and SHIELD was ported over to the Google Assistant. 2018 brought the Turing GPU architecture, the DGX-2, AGX Xavier, and Clara. 2019 brought AGX Orin for robots and autonomous or semi-autonomous piloting of various types of vehicles. They also made the Jetson Nano and Xavier, and EGX for edge computing. At this point there were plenty of people who used the GPUs to mine hashes for various blockchains, like with cryptocurrencies, and ARM had finally given Intel a run for its money with designs from the ARM alliance showing up in everything but Windows devices (so Apple and Android). So they tried to buy ARM from SoftBank in 2020. That deal eventually fell through but would have been an $8 billion windfall for SoftBank, since they paid $32 billion for ARM in 2016. We probably don't need more consolidation in the CPU sector. Standardization, yes. Some of Nvidia's top competitors include Samsung, AMD, Intel, Qualcomm, and even companies like Apple, who make their own CPUs (but not their own GPUs as of the time of this writing). In their niche they can still make well over $15 billion a year. The invention of the MOSFET came from immigrants Mohamed Atalla, originally from Egypt, and Dawon Kahng, originally from Seoul, South Korea. Kahng was born in Korea in 1931 but immigrated to the US in 1955 to get his PhD at THE Ohio State University and then went to work for Bell Labs, where he and Atalla invented the MOSFET, and where Kahng retired. The MOSFET was an important step on the way to a microchip. That microchip market, with companies like Fairchild Semiconductor, Intel, IBM, Control Data, and Digital Equipment, saw a lot of chip designers who maybe had their chips knocked off, either legally in a clean room or illegally outside of a clean room. Some of those ended in legal action, some didn't. But the fact that factories overseas could reproduce chips was a huge part of the movement that came next, which was that companies started to think about whether they could just design chips and let someone else make them. That was in an era of increasing labor outsourcing, when factories could build cars offshore, and the foundry movement was born - companies that just make chips for those who design them. As we have covered in this section and many others, many of the people who work on these kinds of projects moved to the United States from foreign lands in search of a better life. That might have been to flee European or Asian theaters of Cold War jackassery, or might have been a civil war like in Korea or Taiwan. They had contacts and were able to work with places to outsource to, given that this happened at the same time that Hong Kong, Singapore, South Korea, and Taiwan became safe and free of violence. 
And so the Four Asian Tiger economies exploded, fueled by exports and a rapid period of industrialization that began in the 1960s and continues through to today with companies like TSMC, a pure-play foundry, or Samsung, a mixed foundry - aided by companies like Nvidia, who continue to effectively outsource their manufacturing operations to companies in those areas. At least, while it's safe to do so. We certainly hope the entire world becomes safe. But it currently is not. There are currently nearly a million Rohingya refugees fleeing war in Myanmar. Over 3.5 million have fled the violence in Ukraine. 6.7 million have fled Syria. 2.7 million have left Afghanistan. Over 3 million are displaced between Sudan and South Sudan. Over 900,000 have fled Somalia. Before Ukrainian refugees fled to mostly Eastern European countries, the world's refugees had mainly settled in Turkey, Jordan, Lebanon, Pakistan, Uganda, Germany, Iran, and Ethiopia. Very few, comparably, settled in the three largest countries in the world: China, India, or the United States. It took decades for the children of those who moved, or sent their children abroad for a better life, to find that better life. But we hope that history teaches us to get there faster, for the benefit of all.
9/30/2022 - 31 minutes, 3 seconds

The History of Zynga and founder Mark Pincus

Mark Pincus was at the forefront of mobile technology when it was just being born. He is a recovering venture capitalist who co-founded his first company with Sunil Paul in 1995. FreeLoader was at the forefront of giving people the news through push technology, just as the IETF was in the process of standardizing the early versions of HTTP. He sold that for $38 million only to watch it get destroyed. But he did invest in a startup that one of the interns founded, when he gave Sean Parker $100,000 to help found Napster. Pincus then started Support.com, which went public in 2000. Then Tribe.net, which Cisco acquired. As a former user, it was fun while it lasted. Along the way, Pincus teamed up with Reid Hoffman, former PayPal executive and founder of LinkedIn, and bought the Six Degrees patent that basically covered all social networking. Along the way, he invested in Friendster, Buddy Media, Brightmail, JD.com, Facebook, Snapchat, and Twitter. Investing in all those social media properties gave him a pretty good insight into what trends were on the way. Web 2.0 was on the rise and social networks were spreading fast. As they spread, each attempted to become a platform by opening APIs for third-party developers. This led to an opening to create a new company that could build software that sat on top of these social media companies. Meanwhile, the gaming industry was in a transition from desktop and console games to hyper-casual games that are played on mobile devices. So Pincus recruited conspirators to start yet another company, and with Michael Luxton, Andrew Trader, Eric Schiermeyer, Steve Schoettler, and Justin Waldron, Zinga was born in 2007. Actually, Zinga is the dog. The company Zynga was born in 2007. Facebook was only three years old at the time, but was already at 14 million users to start 2007. That's when they opened up APIs for integration with third-party products through FBML, or Facebook Markup Language. They would have 100 million within a year. Given Pincus's track record selling companies and picking winners, Zynga easily raised $29 million to start what amounts to a social game studio. They make games that people access through social networks. Luxton, Schiermeyer, and Waldron created the first game, Zynga Poker, in 2007. It was a simple enough Texas hold 'em poker game but rose to include tens of millions of players at its height, raking in millions in revenue. They'd proven the thesis. Social networks, especially Facebook, were growing. The iPhone came out in 2007. That only hardened their resolve. They sold poker chips in 2008. Then came FarmVille. FarmVille was launched in 2009 and was an instant hit. The game went viral and had a million daily users in a week. It was originally written in Flash and later ported to iPhones and other mobile platforms. It's now been installed over 700 million times and ran until 2020, when Flash support was dropped by Facebook. FarmVille was free-to-play and simple. It had elements of a 4X game like Civilization, but was co-op, meaning players didn't exterminate one another but instead earned points and thus rankings. In fact, players could help speed up tasks for one another. Players began with a farm - an empty plot of land. They earned experience points by doing routine tasks. Things like growing crops, upgrading items, plowing more and more land. Players took their crops to the market and sold them for coins. Coins could also be bought. If a player didn't harvest their crops when they were mature, the crops would die. 
Thus, they had players coming back again and again. Push notifications helped remind people about the state of their farm. Or the news, in FreeLoader-speak. Some players became what we called dolphins, or players that spent about what they would on a usual game - maybe $10 to $30. Others spent thousands, which we referred to as whales. They became the top game on Facebook and the top earner. They launched sequels as well, with FarmVille 2 and FarmVille 3. They bought Challenge Games in 2010, which had been founded by Andrew Busey to develop casual games as well. They bought 14 more companies. They grew to 750 employees. They opened offices in Bangalore, India, and in Ireland. They experimented with other platforms, like Microsoft's MSN gaming environment and Google TV. They released CastleVille. And they went public towards the end of 2011. It was a whirlwind ride, and just really getting started. They released cute FarmVille toys. They also released Project Z, Mafia Wars, Hanging with Friends, Adventure World, and Hidden Chronicles. And along the way they became a considerable advertising customer for Facebook, with ads showing up for Mafia Wars and Project Z constantly. Not only that, but their ads flooded other mobile ad networks as The Sims Social and other games caught on and stole eyeballs. And players were rewarded for spamming the walls of other players, which helped to increase the viral nature of the early Facebook games. Pincus and the team built a successful, vibrant company. They brought in Jeff Karp and launched Pioneer Trail. Then another smash hit, Words with Friends. They bought Newtoy for $53.3 million to get it; Newtoy had been founded by Paul and David Bettner, who wrote a game called Chess with Friends a few years earlier. But revenues dropped as the Facebook ride they'd been on began to transition from people gaming in a web browser to mobile devices. All this growth and the company was ready for the next phase. In 2013, Zynga hired Donald Mattrick to be the CEO and Pincus moved to the role of Chief Product Officer. They brought in Alex Garden, the General Manager for Xbox Music, Video, and Reading, who had founded the Homeworld creator Relic Entertainment back in the 1990s. The new management didn't fix the decline. The old games continued to lose market share, and Pincus came back to run the company as CEO and cut the staff by 18 percent. In 2015 they brought Frank Gibeau onto the board and by 2016 moved him to CEO of the company. One challenge with the move to mobile was who got to process the payments. Microtransactions had gone through Facebook for years. They moved to Stripe in 2020. They acquired Gram Games to get Merge Dragons! They bought Small Giant Games to get Empires & Puzzles. They bought Peak Games to get Toon Blast and Toy Blast. They picked up Rollic to get a boatload of action and puzzle games. They bought Golf Rival by acquiring StarLark. And as of the time of this writing they have nearly 200 million players actively logging into their games. There are a few things to take from the story of Zynga. One is that a free game doesn't put $2.8 billion in revenues on the board, which is what they made in 2021. Advertising accounts for just north of a half billion, but the rest comes from in-app purchases. The next is that the transition from owner-operators is hard. Pincus and the founding team had a great vision. They executed and were rewarded by taking the company to a gangbuster IPO. The market changed and it took a couple of pivots to get there. 
That led to a couple of management shakeups and a transition to more of a portfolio mindset with the fleet of games they own. Another lesson is that larger development organizations don't necessarily get more done. That's why Zynga has had to acquire companies to get hits since around the time that they bought Words with Friends. Finally, when a company goes public the team gets distracted. Not only is going through an IPO expensive and the ensuing financial reporting requirements a hassle to deal with, but it's distracting. Employees look at stock prices during the day. Higher-ranking employees have to hire a team of accountants to shuffle their money around in order to take advantage of tax loopholes. Growth leads to political infighting and power grabbing. There are also regulatory requirements around how we manage our code and technology that slow down innovation. But it all makes us better run and a safer partner eventually. All companies go through this. Those who navigate towards a steady state fastest have the best chance of surviving. One more lesson: when the first movers prove a monetization thesis, the ocean will get red fast. Zynga became the top mobile development company again after weathering the storm and making a few solid acquisitions. But as Bill Gates pointed out in the 1980s, gaming is a fickle business. So Zynga agreed to be acquired for $12.7 billion in 2022 by Take-Two Interactive, who now owns the Civilization, Grand Theft Auto, Borderlands, WWE, Red Dead, Max Payne, NBA 2K, PGA 2K, BioShock, Duke Nukem, Rainbow Six: Rogue Spear, Battleship, and Centipede franchises - and the list goes on and on. They've been running a portfolio for a long time. Pincus took away nearly $200 million in the deal and about $350 million in Take-Two equity. Ads and loot boxes can be big business. Meanwhile, Pincus and Hoffman from LinkedIn work well together, apparently. They built Reinvent Capital, an investment firm that shows that venture capital has quite a high recidivism rate. They had a number of successful investments and SPACs. Zynga was much more. They exploited Facebook to shoot up to hundreds of millions in revenue. That was revenue Facebook then decided they should have a piece of in 2011, which cut those Zynga revenues in half over time. This is an important lesson any time a huge percentage of revenue is dependent on another party who can change the game (no pun intended) at any time. Diversify. 
8/19/2022 - 16 minutes, 24 seconds

The Evolution Of Unix, Mac, and Chrome OS Shells

In the beginning was the command line. Actually, before that were punch cards and paper tape. But as Multics and RSTS and DTSS came out, programmers and users needed a way to interface with the computer through the teletypes and other terminals that appeared in the early age of interactive computing. Those interfaces were often just a program that sat on a filesystem, eventually as a daemon, listening for input from keyboards. This was one of the first things the team that built Unix needed, once they had a kernel that could compile. And from the very beginning it was independent of the operating system. Due to the shell's independence from the underlying operating system, numerous shells have been developed during Unix's history, although only a few have attained widespread use. Shells, also referred to as command-line interpreters (or CLIs), process commands a user sends from a teletype, and later a terminal. This provided a simpler interface for common tasks than programming against the underlying C interfaces. Over the years, a number of shells have come and gone. Some of the most basic and original commands came from Multics, but the shell as we know it today was introduced as the Thompson shell in the first versions of Unix. Ken Thompson introduced the first Unix shell in 1971 with the Thompson shell, the ancestor of the shell we still find in /bin/sh. The shell ran in the background and allowed for a concise syntax for redirecting the output of commands to one another. For example, pass the output to a file with > or read input from a file with <. Others built tools for Unix as well. Bill Joy wrote a different text editor when Berkeley had Thompson out to install Unix on their PDP. And 1977 saw the earliest forms of what we would later call the Bourne shell, written by Steve Bourne. The Bourne shell was designed with two key aims: to act as a command interpreter for interactively executing operating system commands and to facilitate scripting. One of the more important aspects of going beyond piping output into other commands and into a more advanced scripting language is the ability to use conditional if/then statements, loops, and variables. And thus, rather than learn C to write simple programs, generations of engineers and end users could now do basic programming at a Bourne shell. Bill Joy created the C shell in 1978 while a graduate student at the University of California, Berkeley. It was designed for Berkeley Software Distribution (BSD) Unix machines. One of the main design goals of the C shell was to build a scripting language that seemed like C. Joy added one of my favorite features of every shell made after that one: command history. I've written many shell scripts by just cut-copy-pasting a few commands from my bash history and piping the output or putting it into variables. Add to that the ability to use the up or down arrow to re-run previous commands and we got a huge productivity gain for people that did the same tasks, like editing a file. Simply scroll up through previous commands to run the same vi command. That vi command also shipped first with BSD. There was another huge time saver out there in another operating system. An operating system called Tenex had name and command completion. The Tenex OS first shipped out of BBN, or Bolt, Beranek, and Newman, for PDPs. Unix ran on PDPs as well, and so a number of early users had experience with both. 
Tenex had command completion: just hit the tab key and the command being typed would automatically complete if the text typed so far matched the text of a command in the command path. That project was started by Ken Greer at Carnegie Mellon University in 1975 and got merged into the C shell in 1981, adding the t for Tenex to the csh for C shell and giving us tcsh. Thus tcsh had backwards compatibility with csh. David Korn at Bell Labs added the Korn shell, or ksh, in the early 1980s. He added the idea that the shell could offer a choice of command-line editing modes - for example, emacs or vi keybindings. He borrowed ideas from the C shell and made minor tweaks that provided outsized impacts to productivity. Even Microsoft added a Korn shell option into Windows NT, as though Dave Cutler was paying homage to another great programmer. Brian Fox then added on to the Bourne shell with bash. He was working with the Free Software Foundation with Richard Stallman, and they wanted a shell that could do more advanced scripting but whose source code was open source. They started the project in 1988 and shipped bash in 1989. Bash went on to become the most widely used and distributed shell in the arsenal of the Unix programmer. Bash stands for Bourne Again Shell, so it was backwards compatible with the Bourne shell, but it also added features from tcsh, ksh, and the C shell while staying mostly compatible with other shells. Due to the licensing, bash became the de facto standard (and often default shell) for GNU/Linux distributions and serves as the standard interactive shell for users, located at /bin/bash. Now we had command history, tabbed auto-completion, command-line editing, multiple paths, multiple options for interpreters, a directory stack, full environment variables, and the modern command-line environment. Paul Falstad created the initial version of zsh, or the Z shell, in 1990. The Z shell (zsh) can be used interactively as a login shell or as a more sophisticated command interpreter for shell scripting. As with previous shells, it is an optimized Bourne-style shell that incorporates several features from bash and tcsh and is mostly backwards compatible. Zsh comes with tabbed auto-completion, regex integration (in addition to the standard globbing options available since the 1970s), additional shorthand for command scoping, and a number of security features. The ability to limit memory and privilege escalations became critical in order to keep from having some of the same issues we've seen for decades with Windows and other operating systems as they evolved to match Unix scripting, borrowing many a feature for PowerShell from cousins in the Unix and Linux worlds. These are just the big ones. Sometimes it feels like every developer with a decent grasp of C and a workflow divergent from the norm (which is most developers) has taken a stab at developing their own shell. This is one of the great parts of having access to source code. The options are endless. At this point, we just take these productivity gains for granted. But it was decades of innovative approaches as Unix and then Linux and now macOS and Android reached out to the rest of the world to change how we work. 
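To make those scripting features concrete, here is a minimal Bourne-style sketch showing variables, an if/then conditional, a loop, a pipeline-style command substitution, and output redirection - the building blocks described above. The log path, keywords, and filename are made-up examples for illustration, not anything referenced in the episode.

#!/bin/sh
# Minimal sketch: variables, a conditional, a loop, and redirection.
# The log path and keywords below are hypothetical examples.
logfile="/var/log/system.log"        # variable assignment
keyword="error"

if [ ! -f "$logfile" ]; then         # if/then conditional
  echo "No log file found at $logfile"
  exit 1
fi

# Count case-insensitive matches by capturing grep's output
count=$(grep -ci "$keyword" "$logfile")

# Loop over a few words and redirect the combined output to a report file
for word in error warning fail; do
  echo "$word: $(grep -ci "$word" "$logfile")"
done > report.txt                    # > redirects output to a file, as described above

echo "Found $count lines containing '$keyword'; details are in report.txt"

Saved as, say, logcheck.sh (a hypothetical name), it runs under any Bourne-compatible shell - sh, bash, ksh, or zsh - which is exactly the kind of portability the Bourne lineage made possible.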
7/15/2022 - 12 minutes, 43 seconds

St Jude, Felsenstein, and Community Memory

Lee Felsenstein went to the University of California, Berkeley in the 1960s. He worked at the tape manufacturer Ampex - the company Oracle's founders would later come out of - before going back to Berkeley to finish his degree. He was one of the original members of the Homebrew Computer Club, and as with so many inspired by the Altair S-100 bus, designed the Sol-20 in 1976, arguably the first microcomputer that came with a built-in keyboard and could be hooked up to a television. The Apple II was introduced the following year. Adam Osborne was another of the Homebrew Computer Club regulars who wrote An Introduction to Microcomputers and sold his publishing company to McGraw-Hill in 1979. Flush with cash, he enlisted Felsenstein to help create another computer, which became the Osborne 1 - the first commercial portable computer, although given that it weighed almost 25 pounds, it's more appropriate to call it a luggable computer. Before Felsenstein built computers, though, he worked with a few others on a community computing project they called Community Memory. Judith Milhon was an activist in the 1960s Civil Rights movement who helped organize marches and rallies and went to jail for civil disobedience. She moved to Ohio, where she met Efrem Lipkin, and as with many in what we might think of as the counterculture now, they moved to San Francisco in 1968. St. Jude, as she came to be called, learned to program in 1967 and ended up at the Berkeley Computer Company after the work on the Berkeley timesharing projects was commercialized. There, she met Pam Hardt at Project One. Project One was a technological community built around an alternative high school founded by Ralph Scott. They brought together a number of non-profits to train people in various skills, and as one might expect in the San Francisco area counterculture, they had a mix of artists, craftspeople, filmmakers, and people with deep roots in technology. So much so that it became a bit of a technological commune. They had a warehouse and did day care, engineering, film processing, and documentaries, and many participated in anti-Vietnam War protests. They had all this space, and Hardt called around to find a computer. She got an SDS-940 mainframe donated by TransAmerica in 1971. Xerox had gotten out of the computing business and TransAmerica's needs were better suited to other computers at the time. They had this idea to create a bulletin board system for the community, and so created a project at Project One they called Resource One. Plenty thought computers were evil at the time, given their rapid advancements during the Cold War era, and yet many also thought there was incredible promise to democratize everything. Peter Deutsch then donated time and an operating system he'd written a few years before. She then published a request for help in the People's Computer Company magazine and got a lot of people who just made their own things - an early precursor, maybe, to micro-services, where various people tinkered with data and programs. They were able to do so because of the people who could turn that SDS into a timesharing system. St. Jude's partner Lipkin took on the software part of the project. Chris Macie wrote a program that digitized information on social services offered in the area, which was maintained by Mary Janowitz, Sherry Reson, and Mya Shone. That was eventually taken over by the United Way until the 1990s. Felsenstein helped with the hardware. 
They used teletype terminals, and then a video terminal and keyboard built into a wooden cabinet, so real humans could access the system. The project then evolved into what was referred to as Community Memory.

Community Memory
Community Memory became the first public computerized bulletin board system, established in 1973 in Berkeley, California. The first Community Memory terminal was located at Leopold's Records in Berkeley. This was the first opportunity for people who were not studying scientific subjects to be able to use computers. It allowed the team to expand the timesharing system into the community, and it became a free online community-based resource used to share knowledge, organize, and grow. The initial stage of Community Memory, from 1973 to 1975, was an experiment to see how people would react to using computers to share information. It became very popular, but was soon shut down by the founders because they faced hurdles replicating the equipment and languages being used; they were unable to expand the project. Operating from 1973 to 1992, it went from minicomputers to microcomputers as those became more prevalent. Before Resource One and Community Memory, computers weren't necessarily used for people. They were used for business, scientific research, and military purposes. After Community Memory, Felsenstein and others in the area and around the world helped make computers personal. Community Memory was one aspect of that process, but there were others that unfolded in the UK, France, Germany, and even the Soviet Union - although those were typically impacted by embargoes and a lack of the central government's buy-in for computing in general. After the initial work was done, many of the core instigators went in their own directions. For example, Felsenstein went on to create the Sol and pursue his other projects in personal computing. Many had families or moved out of the area after the Vietnam War ended in 1975. The economy still wasn't great, but the technical skills made them more employable. Some of the developers and a new era of contributors regrouped and created a new non-profit in 1977. They started from scratch and developed their own software, database, and communication packages. The printing terminal was very noisy, so they encased it in a cardboard box with a transparent plastic top so they could see what was being printed out. This program ran from 1984 to 1989. After more research, a new terminal was released in 1989 in Berkeley. By then it had evolved into a pre-web social network. The modified keyboard had brief instructions mounted on it, which showed the steps to send a message, how to attach keywords to messages, and how to search those keywords to find messages from others. Ultimately, the design underwent three generations, ending in a network of text-based browsers running on basic IBM PCs accessing a Unix server. It was never connected to the Internet, and closed in 1992. By then, it was large, underpowered, and uneconomical to run in an era where servers and graphical interfaces were available. A booming economy also, ironically, meant a shortage of funding. The job market exploded for programmers in the decade that led up to the dot-com bubble, and with inconsistent marketing and outreach, Community Memory shut down in 1992. Many of the people involved with Resource One and Community Memory went on to have careers in computing. St. Jude helped found the cypherpunks and created Mondo 2000, a magazine dedicated to that space where computers meet culture. 
She also worked with Efrem Lipkin on CoDesign, and he was a CTO for many of the dot-coms in the late 1990s. Chris Neustrup became a programmer for Agilent. The whole operation had been funded by various grants and donations, and while there haven't been any studies on the economic impact, due to how hard it is to attribute inspiration rather than direct influence, the payoff was nonetheless considerable.
6/25/2022 - 11 minutes, 38 seconds

Research In Motion and the Blackberry

Lars Magnus Ericsson was working for the Swedish government agency that made telegraph equipment in the 1870s when he started a little telegraph repair shop in 1876. That was the same year the telephone was invented. After fixing other people's telegraphs and then telephones, he started a company making his own telephone equipment, and by the 1890s was shipping gear to the UK. As the roaring 20s came, they sold stock to buy other companies and expanded quickly. Early mobile devices used radios to connect mobile phones to wired phone networks, and following projects like ALOHANET in the 1970s they expanded to digitize communications, allowing for sending early forms of text messages - the way people might have sent those telegraphs when old Lars was still alive and kicking. At the time, the Swedish state-owned Televerket Radio was dabbling in this space and partnered with Ericsson to take first those messages and then, as email became a thing, email, to people wirelessly, using the 400 to 450 MHz range in Europe and 900 MHz in the US. That standard went to the OSI and became a 1G wireless packet-switching network we call Mobitex. Mike Lazaridis was born in Istanbul and moved to Canada in 1966 when he was five, attending the University of Waterloo in 1979. He dropped out of school to take a contract with General Motors to build a networked computer display in 1984. He took out a loan from his parents, got a grant from the Canadian government, and recruited another electrical engineering student, Doug Fregin from the University of Windsor, who designed the first circuit boards, to join him in starting a company they called Research In Motion. Mike Barnstijn joined them and they were off to do research. After a few years doing research projects, they managed to build up a dozen employees and a million in revenues. They became the first Mobitex provider in America and by 1991 shipped the first Mobitex device. They brought in James Balsillie as co-CEO in 1992, to handle corporate finance and business development - a partnership between co-CEOs that would prove fruitful for 20 years. Some of those work-for-hire projects they'd done involved reading bar codes, so they started with point-of-sale, enabling mobile payments, and by 1993 shipped RIMGate, a gateway for Mobitex. Then came a Mobitex point-of-sale terminal and finally, with the establishment of the PCMCIA standard, a PCMCIA Mobitex modem they called Freedom. Two-way paging had already become a thing and they were ready to venture out of PoS systems. So in 1995, they took a $5 million investment to develop the RIM 900 OEM radio modem. The next year they also developed a pager they called the Inter@ctive Pager 900 that was capable of two-way messaging. Then they went public on the Toronto Stock Exchange in 1997. The next year, they sold a licensing deal to IBM for the 900 for $10 million. That IBM mark of approval is always a sign that a company is ready to play in an enterprise market. And enterprises increasingly wanted to keep executives just a quick two-way page away. But everyone knew there was a technology convergence on the way. They worked with Ericsson to further the technology and over the next few years competed with SkyTel in the interactive pager market.

Enter The BlackBerry
They knew there was something new coming. Just as the founders know something is coming in quantum computing and run a fund for that now. 
They hired a marketing firm called Lexicon Branding to come up with a name, and after they saw the keys on the now-iconic keyboard, the marketing firm suggested BlackBerry. They'd done the research and development and they thought they had a product that was special. So they released the first BlackBerry, the 850, in Munich in 1999. But those were still using radio networks, and more specifically the DataTAC network. The age of mobility was imminent, although we didn't call it that yet. Handspring and Palm each went public in 2000. In 2000, Research In Motion brought out its first cellular phone product in the BlackBerry 957, with push email and internet capability. But then came the dot-com bubble. Some thought the Internet might have been a fad and in fact might disappear. But instead the world was actually ready for that mobile convergence. Part of that was having developed a great operating system for the time, the BlackBerry OS, which they had released the year before. And in 2000 the BlackBerry was named Product of the Year by InfoWorld. The new devices took the market by storm and shattered the previous personal information manager market, with shares of Palm dropping by over 90% and Palm OS being set up as its own corporation within a couple of years. People were increasingly glued to their email. While the BlackBerry could do web browsing and faxing over the internet, it was really the integrated email access, phone, and text messaging platform that set it apart - the kind of convergence companies like General Magic had been working toward as far back as the early 1990s.

The Rise of the BlackBerry
The BlackBerry was finally the breakthrough mobile product everyone had been expecting and waiting for. Enterprise-level security, integration with business email like Microsoft's Exchange Server, a QWERTY keyboard that most had grown accustomed to, the option to use a stylus, and a simple menu made the product an instant smash success. And by instant we mean after five years of research and development and a massive financial investment. Palm owned the PDA market, but the Palm VII cost $599 and the BlackBerry cost $399 at the time (which was far less than the $675 the Inter@ctive Pager had cost in the 1990s). The Palm also let us know when we had new messages using the emerging concept of push notifications. 2000 had seen the second version of the BlackBerry OS, and their AOL Mobile Communicator had helped them spread the message that the wealthy could have access to their data any time. But by 2001 other carriers were signing on to support devices and BlackBerry was selling bigger and bigger contracts. 5,000 devices, 50,000 devices, 100,000 devices. And a company called Kasten Chase stepped in to develop a secure wireless interface to the Defense Messaging System in the US, which opened up another potential two million people in the defense industry. They expanded the service to cover more and more geographies in 2001 and revenues doubled, jumping to 164,000 subscribers by the end of the year. That's when they added wireless downloads so users could access all those MIME attachments in email and display them. Finally, reading PDFs on a phone, with the help of GoAmerica Communications! And somehow they won a patent for the idea that a single email address could be used on both a mobile device and a desktop. I guess the patent office didn't understand why IMAP was invented by Mark Crispin at Stanford in the 80s, or why Exchange allowed multiple devices access to the same mailbox. They kept inking contracts with other companies. 
AT&T added the BlackBerry in 2002 in the era of GSM. The 5810 was the first truly convergent BlackBerry that offered email and a phone in one device with seamless SMS communications. It shipped in the US and the 5820 in Europe and Cingular Wireless jumped on board in the US and Deutsche Telekom in Germany, as well as Vivendi in France, Telecom Italia in Italy, etc. The devices had inched back up to around $500 with service fees ranging from $40 to $100 plus pretty limited data plans. The Tree came out that year but while it was cool and provided a familiar interface to the legions of Palm users, it was clunky and had less options for securing communications. The NSA signed on and by the end of the year they were a truly global operation, raking in revenues of nearly $300 million.  The Buying Torndado They added web-based application in 2003, as well as network printing. They moved to a Java-based interface and added the 6500 series, adding a walkie-talkie function. But that 6200 series at around $200 turned out to be huge. This is when they went into that thing a lot of companies do - they started suing companies like Good and Handspring for infringing on patents they probably never should have been awarded. They eventually lost the cases and paid out tens of millions of dollars in damages. More importantly they took their eyes off innovating, a common mistake in the history of computing companies. Yet there were innovations. They released Blackberry Enterprise Server in 2004 then bolted on connectors to Exchange, Lotus Domino, and allowed for interfacing with XML-based APIs in popular enterprise toolchains of the day. They also later added support for GroupWise. That was one of the last solutions that worked with symmetric key cryptography I can remember using and initially required the devices be cradled to get the necessary keys to secure communications, which then worked over Triple-DES, common at the time. One thing we never liked was that messages did end up living at Research in Motion, even if encrypted at the time. This is one aspect that future types of push communications would resolve. And Microsoft Exchange’s ActiveSync.  By 2005 there were CVEs filed for BlackBerry Enterprise Server, racking up 17 in the six years that product shipped up to 5.0 in 2010 before becoming BES 10 and much later Blackberry Enterprise Mobility Management, a cross-platform mobile device management solution. Those BES 4 and 5 support contracts, or T-Support, could cost hundreds of dollars per incident. Microsoft had Windows Mobile clients out that integrated pretty seamlessly with Exchange. But people loved their Blackberries. Other device manufacturers experimented with different modes of interactivity. Microsoft made APIs for pens and keyboards that flipped open. BlackBerry added a trackball in 2006, that was always kind of clunky. Nokia, Ericsson, Motorola, and others were experimenting with new ways to navigate devices, but people were used to menus and even styluses. And they seemed to prefer a look and feel that seemed like what they used for the menuing control systems on HVAC controls, video games, and even the iPod.  The Eye Of The Storm A new paradigm was on the way. Apple's iPhone was released in 2007 and Google's Android OS in 2008. By then the BlackBerry Pearl was shipping and it was clear which devices were better. No one saw the two biggest threats coming. Apple was a consumer company. 
They were slow to add ActiveSync policies, which many thought would be the corporate answer to mobile management in the way group policies in Active Directory had become for desktops. Apple and Google were slow to take the market, as BlackBerry continued to dominate the smartphone industry well into 2010, especially once then-president Barack Obama strong-armed the NSA into allowing him to use a special version of the BlackBerry 8830 World Edition for official communiques. Other world leaders followed suit, as did the leaders of global companies that had previously been luddites when it came to constantly being online. Even Eric Schmidt, then chairman of Google, loved his Crackberry in 2013, 5 years after the arrival of Android. Looking back, we can see a steady rise in iPhone sales up to the iPhone 4, released in 2010. Many still said they loved the keyboard on their BlackBerries. Organizations had built BES into their networks and had policies dating back to DISA STIGs. Research In Motion owned the enterprise and held over half the US market and a fifth of the global market. That peaked in 2011. BlackBerry put mobility on the map. But companies like AirWatch, founded in 2003, and MobileIron, founded in 2007, had risen to take a cross-platform approach to the device management aspect of mobile devices. We call them Unified Endpoint Management products today, and companies could suddenly support BlackBerry, Windows Mobile, and iPhones from a single console. Over 50 million BlackBerries were being sold a year and the stock was soaring at over $230 a share.  Today, they hold no market share and their stock performance shows it, even though they’ve pivoted to more of a device management company, given their decades of experience working with some of the biggest and most secure companies and governments in the world. The Fall Of The BlackBerry. The iPhone was beautiful. It had amazing graphics and a full touch screen. It was the very symbol of innovation. The rising tide of the App Store also made it a developer’s playground (no pun intended). It was more expensive than the BlackBerry, but while Apple didn’t cater to the enterprise, they wedged their way in there, first with executives and then anyone. Initially that was because of ActiveSync, which had come along in 1996 mostly to support Windows Mobile, but by Exchange Server 2003 SP2 could do almost anything Outlook could do - provided software developers like Apple could make the clients work. So by 2011, Exchange clients could automatically locate a server based on an email address (or more to the point based on DNS records for the domain) and work just like webmail, which was enabled in almost every IIS implementation that worked with Exchange. And Office 365 was released in 2011, paving the way to move from on-prem Exchange to what we now call “the cloud.” And Google Mail had been around for 7 years by then and people were putting it on the BlackBerry as well, blending home and office accounts on the same devices at times. In fact, Google licensed Exchange ActiveSync, or EAS, in 2009, so support for Gmail was showing up on a variety of devices. BlackBerry had everything companies wanted. But people slowly moved to that new iPhone. Or to Androids, when decent models of phones started shipping with the OS on them. BlackBerry stuck by that keyboard, even though it was clear that people wanted full touchscreens. The BlackBerry Bold came out in 2009. BlackBerry had not just doubled down on the keyboard instead of a full touchscreen, they tripled down on it. 
They had released the Storm in 2008 and then the Storm 2 in 2009, but they just had a different kind of customer. Albeit one that was slowly starting to retire. This is the hard thing about being in the buying tornado. We’re so busy transacting that we can’t think beyond staying in the eye, and we don’t see how the world is changing outside of it.  As we saw with companies like Amdahl and Control Data, when we only focus on big customers and ignore the mass market, we leave room for entrants in our industries who have more mass appeal. Since the rise of the independent software market following the IBM anti-trust cases, app developers have been a bellwether of successful platforms. And the iPhone revenue split was appealing to say the least.  Sales fell off fast. By 2012, the BlackBerry represented less than 6 percent of smartphones sold, by the start of 2013 that number had dropped in half, and it fell to less than 1 percent in 2014. That’s when the White House tested replacements for the BlackBerry. There was a small bump in sales when they finally released a product with specs competitive with the iPhone, but it was short-lived. The Crackberry craze was officially over.  BlackBerry shot into the mainstream and brought the smartphone with them. They made the devices secure and work seamlessly in corporate environments, for those who could pay the money to run BES or BIS. They proved the market and then got stuck in the Innovator’s Dilemma. They became all about features that big customers wanted and needed. And so they missed the personal part of personal computing. Apple, as they did with the PC and then graphical user interfaces, saw a successful technology and made people salivate over it. They saw how Windows had built a better sandbox for developers and built the best app delivery mechanism the world has seen to date. Google followed suit and managed to take a much larger piece of the market with more competitive pricing.  There is so much we didn’t discuss, like the short-lived PlayBook tablet from BlackBerry. Or the Priv. Because for the most part, they are a device management solution today. The founders are long gone, investing in the next wave of technology: quantum computing. The new face of BlackBerry is chasing device management, following adjacencies into security and dabbling in IoT for healthcare and finance. Big-ticket types of buys that range from red teaming to automotive management to XDR. Maybe their future is in the convergence of post-quantum security, or maybe we’ll see their $5.5B market cap get tasty enough for one of those billionaires who really, really, really wants their chiclet keyboard back. Who knows, but part of the fun of this is that it’s a living history.
6/17/2022 • 25 minutes, 45 seconds
Episode Artwork

Colossal Cave Adventure

Imagine a game that begins with a printout that reads: You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower. Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions had a monitor, so a screen could just show a cursor and the player needed to know what to type. Type N and hit enter and the player travels north. “Search” doesn’t work but “look” does. “Take water” works, as does “Drink water,” but it takes hours to find dwarves and dragons and figure out how to battle or escape. This is one of the earliest games we played and it was marvelous. The game was called Colossal Cave Adventure and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong.  The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10. The author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois Champaign-Urbana. As the computer monitor spread, so spread games. William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, or the people who developed the Interface Message Processor, the first nodes of the packet switching ARPANET, the ancestor of the Internet. They were long hours, but when he wasn’t working, he and his wife Pat explored caves. She was a programmer as well. Or he played the new Dungeons & Dragons game that was popular with other programmers. The two got divorced in 1975 and like many suddenly single fathers he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco. Crowther loosely based the game mechanics on the ELIZA natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show how computers could appear to understand text people typed into them. It was most notably used in tests to have a computer provide therapy sessions. And writing software for the kids or gaming can be therapeutic as well. As can replaying happier times.  Crowther explored Mammoth Cave National Park in Kentucky in the early 1970s. The locations in the game follow along with his notes about the caves, players exploring the area around them using natural language while the computer looked for commands in what was entered. It took about 700 lines of FORTRAN to write the original code for the PDP-10 he had at his disposal at BBN. When he was done he went off on vacation, and the game spread. Programmers in that era just shared code. Source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10. 
He went to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game and asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, or the Digital Equipment Computer Users Society. A lot of people went there for software at the time. The game was up to 3,000 lines of code when it left Woods. The adventurer could now enter the mysterious cave in search of the hidden treasures. The concept of the computer as a narrator began with Colossal Cave Adventure and is now widely used. Although we now have vast scenery rendered and can point and click where we want to go, so we don’t need to type commands as often. The interpreter looked for commands like “move”, “interact” with other characters, “get” items for the inventory, etc., a pattern sketched in the example below. Woods went further and added more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update that game until the mid-1990s. James Gillogly of RAND ported the code to C so it would run on the newer Unix architecture in 1977 and it’s still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80 and followed that up in 1981 with a version for Microsoft DOS, or MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga. Bob Supnik rose to a Vice President at Digital Equipment, not because he ported the game, but it didn’t hurt. And throughout the 1980s, the game spread to other devices as well. Peter Gerrard implemented the version for the Tandy 1000. The Original Adventure was a version that came out of Aventuras AD in Spain. They gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even though it was eventually displaced by Zork. Zork came along in 1977 and Adventureland in 1979. Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure in 1979. The two became transfixed and opened their own company to make the game they released the next year, called Mystery House. And the text adventure genre moved to a new level when they sold 15,000 copies and it became the first hit. Rogue and others followed, increasingly interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game, Adventure.  Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He was inspired into a life of programming by a professor he had in college, Ken Thompson, who was teaching while on sabbatical from Bell Labs. That’s where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. Robinett’s Adventure went on to sell over a million copies and the genre of fantasy action-adventure games moved from text to video.
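The two-word parsing these games used is simple enough to sketch. What follows is a minimal, hypothetical example written in Python rather than Crowther’s original FORTRAN, with invented rooms instead of the real cave, showing the pattern described above: split the typed line into a verb and a noun, match the verb against a small vocabulary, and walk a table of locations and exits.

# A minimal sketch of the verb-noun command loop used by early text adventures.
# Illustrative Python, not Crowther's FORTRAN; the rooms and text are invented.
ROOMS = {
    "road": {"text": "You are standing at the end of a road before a small brick building.",
             "exits": {"north": "building", "south": "valley"}},
    "building": {"text": "You are inside a small brick building, a wellhouse for a spring.",
                 "exits": {"south": "road"}},
    "valley": {"text": "You are in a valley beside a stream.",
               "exits": {"north": "road"}},
}
ABBREVIATIONS = {"n": "north", "s": "south", "e": "east", "w": "west"}

def parse(line):
    """Split a typed line into at most a verb and a noun, like the original two-word parser."""
    words = line.lower().split()
    verb = words[0] if words else ""
    noun = words[1] if len(words) > 1 else ""
    if verb == "go" and noun:  # "go north" is treated the same as "north"
        verb, noun = noun, ""
    return ABBREVIATIONS.get(verb, verb), noun

def play():
    location = "road"
    print(ROOMS[location]["text"])
    while True:
        verb, noun = parse(input("> "))
        if verb in ("quit", "q"):
            break
        elif verb == "look":
            print(ROOMS[location]["text"])
        elif verb in ROOMS[location]["exits"]:
            location = ROOMS[location]["exits"][verb]
            print(ROOMS[location]["text"])
        else:
            # Anything outside the tiny vocabulary gets the classic shrug.
            print("I don't understand that.")

if __name__ == "__main__":
    play()

The real game layered tables of travel rules, objects, and messages on top of a loop like this one, which is part of why Woods could keep growing the vocabulary without changing the basic structure.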
6/2/2022 • 11 minutes, 28 seconds
Episode Artwork

MySpace And My First Friend, Tom

Before Facebook, there was MySpace. People logged into a web page every day to write to friends, show off photos, and play music. Some of the things we still do on social networks. The world had been shifting to personal use of computers since the early days when time sharing systems were used in universities. Then came the Bulletin Board Systems of the 80s. But those were somewhat difficult to use and prone to be taken over by people like the ones who went on to found DefCon and hacking collectives.  Then in the 1990s computers and networks started to get easier to use. We got tools like AOL Instant Messenger and a Microsoft knockoff called Messenger. It’s different ‘cause it doesn’t say Instant. The rise of the World Wide Web meant that people could build their own websites in online communities. We got these online communities like Geocities in 1994, where users could build their own little web page. Some were notes from classes at universities; others, how to be better at dressing goth. They tried to sort people by communities they called cities, and then each member got an address number in their community. They grew fast and even went public before being acquired by Yahoo! in 1999. Tripod showed up the year after Geocities came out and got acquired by Yahoo! competitor Lycos in 1998, signaling that portal services in a pre-modern search engine world would be getting into more content to show ads to eyeballs. Angelfire was another that started in 1996 and ended up in the Lycos portfolio as well. More people had more pages and that meant more eyeballs to show ads to. No knowledge of HTML was really required but it did help to know some. The GeoCities idea about communities was a good one. Turns out people liked hanging out with others like themselves online. People liked reading thoughts and ideas and seeing photos if they ever bothered to finish downloading. But forget to bookmark a page and it could be lost in the cyberbits or whatever happened to pages when we weren’t looking at them.  The concept of six degrees of Kevin Bacon had been rolling around a bit, so Andrew Weinreich got the idea to do something similar to Angelfire and the next year created SixDegrees.com. It was easy to evolve the concept to bookmark pages by making connections on the site. Except that to get people into the site and signing up, the model appeared to be the flip side: enter real world friends and family and they were invited to join up. Accepted contacts could then post on each other’s bulletin boards or send messages to one another. We could also see who our connections were connected to, thus allowing us to say “oh I met that person at a party.” Within a few years the web of contacts model was so successful that it had a few million users and was sold for over $100 million. By 2000 it was shut down, but it had proven there was a model there that could work. Xanga came along the next year as a weblog and social networking site but never made it to the same level of success. Classmates.com is still out there as well, having been founded in 1995 to build a web of contacts for finding those friends from high school we lost contact with. Then came Friendster and MySpace in 2003. Friendster came out of the gate faster but faded away quicker. These took the concepts of SixDegrees.com, where users invited friends and family, but went a little further, allowing people to post on one another’s boards.  MySpace went a little further. 
They used some of the same concepts Geocities used and allowed people to customize their own web pages. When some people learned HTML to edit their pages, they got the bug to create. And so a new generation of web developers was created as people learned to lay out pages and do basic web programming in order to embed files, flash content, change backgrounds, and insert little DHTML or even JavaScript snippets. MySpace was co-founded by Chris DeWolfe, Aber Whitcomb, Josh Berman, and Tom Anderson while working at an incubator or software holding company called eUniverse, which was later renamed to Intermix Media. Brad Greenspan founded that after going to UCLA and then jumping headfirst into the startup universe. He created Entertainment Universe, then raised $2M in capital from Lehman Brothers, another $5M from others, and bought a young site called CD Universe, which was selling Compact Discs online. He reverse merged that into an empty public shell company, like a modern SPAC works, and was suddenly the CEO of a public company, expanding into online DVD sales. Remember, these were the days leading up to the dot com bubble. There was a lot of money floating around. They expanded into dating sites and other membership programs. We’d think of monthly member fees as Monthly Recurring Revenue now, but at the time there was so much free stuff on the internet that most sites just gave it away and built revenue streams on advertising. CDs and DVDs have data on them. Data can be shared. Napster proved how lucrative that could be by then. Maybe that was something eUniverse should get into. DeWolfe created a tool called Sitegeist, which was a site with a little dating, a little instant messaging, and a little hyper-localized search. It was just a school project but got him thinking. Then, like millions of us were about to do, he met Tom. Tom was a kid from the valley who’d been tinkering with computers for years as “Lord Flathead” and had been busted hacking as a kid, before going off to the University of California at Berkeley and then coming home to LA to do software QA for an online storage company. The company he worked for got acquired as a depressed asset by eUniverse in 2002, along with Josh Berman. They got matched up with DeWolfe, and saw this crazy Friendster coming out of nowhere and decided to build something like it. They had a domain they weren’t using called MySpace.com, which they were going to use for another online storage project. So they grabbed Aber Whitcomb, fired up a ColdFusion IDE, and, given the other properties eUniverse was sitting on, had the expertise to get everything up and running fairly quickly. So they launched MySpace internally first and then had little contests to see who could get the most people to sign up. eUniverse had tens of millions of users on the other properties so they emailed them too. Within two years they had 20 million users and were the centerpiece of the eUniverse portfolio. Wanting in on what the young kids were doing these days, Rupert Murdoch and News Corporation, or NewsCorp for short, picked up the company for $580 million in cash. It’s like an episode of Succession, right? After the acquisition by NewsCorp, MySpace continued its exponential growth. Later in the year, the site started signing up 200,000 new users every day. About a year later, it was registering approximately 320,000 users each day. They localized into different languages and became the biggest website in the US. 
So they turned on the advertising machine, paying back their purchase price by delivering $800 million in revenue to NewsCorp.  MySpace had become the first big social media platform that was always free and allowed users to freely express their minds and thoughts with millions of other users, provided they were 13 years or older. They restricted access to profiles of people younger than 16 in such a way that they couldn’t be viewed by people over 18 years old. That was to keep sexual predators from accessing the profile of a minor. Kids turned out to be a challenge. In 2006, during extensive research, the company began detecting and deleting profiles of registered sex offenders which had started showing up on the platform.  MySpace partnered with Sentinel Tech Holdings Corporation to build a searchable, national database containing names, physical descriptions, and other identity details, known as Sentinel Safe, which allowed them to keep track of over half a million registered sex offenders from U.S. government records. This way they developed the first national database of convicted sex offenders to protect kids on the platform, which they then provided to state attorneys general when the sex offenders tried to use MySpace.  Facebook was created in 2004 and Twitter was created in 2006. They picked up market share, but MySpace continued to do well in 2007, then not as well in 2008. By 2009, Facebook surpassed MySpace in the number of unique U.S. visitors. MySpace began a rapid decline and lost members fast. Network effects can disappear as quickly as they are created. They kept the site simple and basic; people would log in, make new friends, and share music, photos, and chat with people. Facebook and Twitter constantly introduced new features for users to explore; this kept the existing users on the site and attracted more users. Then social media companies like Twitter began to target users on MySpace.  New and more complicated issues kept coming up. Pages were vandalized, there were phishing attacks, malware got posted to the site, and there were outages as the ColdFusion code had been easy to implement but proved harder to hyperscale. In fact, few had needed to scale a site like MySpace had in that era. Not only were users abandoning the platform, but employees at MySpace started to leave. The changes to MySpace’s executive ranks came quickly, and in June 2009 a layoff of 37.5% of the workforce reduced the employee count from 1,600 to 1,000. MySpace attempted to rebrand itself as primarily a music site to try and regain the audience they lost. They changed the layout to make it look more attractive but continued a quick decline just as Facebook and Twitter were in the midst of a meteoric rise. In 2011 News Corporation sold MySpace to Specific Media and Justin Timberlake for around $35 million. Timberlake wanted to make a platform where fans could go and communicate with their favorite entertainers, listen to new music, watch videos, share music, and connect with others who liked the same things. Like Geocities but for music lovers. They never really managed to turn things around. In 2016, MySpace and its parent company were acquired by Time Inc. and later Time Inc. was in turn purchased by the Meredith Corporation. A few months later the news cycle on and about the platform became less positive. A hacker retrieved 427 million MySpace passwords and tried to sell them for $2,800. 
In 2019, MySpace accidentally deleted over 50 million digital files including photos, songs, and videos during a server migration. Everything up to 2015 was erased. In some ways that’s not the worst thing, considering some of the history left on older profiles. MySpace continues to push music today, with shows that include original content, like interviews with artists. It’s more of a way for artists to project their craft than a social network. It’s featured content, either sponsored by a label or artist, or from artists so popular or with such an intriguing story their label doesn’t need to promote them. There are elements of a social network left, but nothing like the other social networks of the day. And there’s some beauty in that simplicity. MySpace was always more than just a social networking website; it was the social network that kickstarted the web 2.0 experience we know today. Tom was the first friend of everyone who joined the network. So he became the first major social media star. MySpace became the most visited social networking site in the world, often surpassing Google in number of visitors. Then the network effect moved elsewhere, and those who inherited the users analyzed what caused them to move away from MySpace and, either through copying features, out-innovating, or acquisition, have managed to remain dominant for over a decade. But there’s always something else right around the corner. One of the major reasons people abandoned MySpace was to be with those who thought just like them. When Facebook was only available to college kids it had a young appeal. It slowly leaked into the mainstream and my grandmother started typing the word like when I posted pictures of my kid. Because we grew up. They didn’t attempt to monetize too early. They remained stable. They didn’t spend more than they needed to keep the site going, so never lost control to investors. Meanwhile, MySpace grew to well over a thousand people to support a web property that would take a dozen to support today. Facebook may move fast and break things. But they do so because they saw what happens when we don’t.
5/14/2022 • 18 minutes, 15 seconds
Episode Artwork

Gateway 2000, and Sioux City

Theophile Bruguier was a fur trader who moved south out of Montreal after a stint as an attorney in Quebec and the death of his fiancée. He became friends with Chief War Eagle of the Yankton Sioux. We call him Chief, but he left the Santee rather than have a bloody fight over who would be the next chief. The Santee were being pushed down from the Great Lakes area of Minnesota and Wisconsin by the growing Ojibwe and were pushing further and further south. There are two main divisions of the Sioux people: the Dakota and the Lakota. And there are two main ethnic groups of the Dakota: the Eastern, sometimes called the Santee, and the Western, or the Yankton. After the issues with his native Santee, he was welcomed by the Yankton, where he had two wives and seven children.  Chief War Eagle then spent time with the white people moving into the area in greater and greater numbers. They even went to war, and he acted as a messenger for them in the War of 1812 and then became a messenger for the American Fur Company and a guide along the Missouri. After the war, he was elected a chief and helped negotiate peace treaties. He married two of his daughters off to Theophile Bruguier, with whom he sailed the Missouri on trips between St Louis and Fort Pierre in the Dakota territory.  The place where Theophile settled was where the Big Sioux and Missouri rivers meet. Two waterways made his cabin a perfect place to trade, and the chief died a couple of years later and was buried in what we now call War Eagle Park, a beautiful hike above Sioux City. His city. Around the same time, the Sioux throughout the Minnesota River valley were moved to South Dakota to live on reservations, having lost their lands, and war broke out in the 1860s.  Back at the Bruguier land, more French moved into the area after Bruguier opened a trading post, and he was one of the 17 white people that voted in the first Woodbury County election, once Wahkaw County was changed to Woodbury to honor Levi Woodbury, a former Supreme Court Justice.  Bruguier sold some of his land to Joseph Leonais in 1852. He sold it to a land surveyor, Dr. John Cook, who founded Sioux City in 1854. By 1860, with the westward expansion of the US, the population had already risen to 400. Steamboats, railroads, and livestock yards followed, and by 1880 there were over 7,000 souls, growing to six times that by the time Bruguier died in 1896. Seemingly more comfortable with those of the First Nations, his body is interred with Chief War Eagle and his first two wives on the bluffs overlooking Sioux City, totally unrecognizable by then. The goods this new industry brought had to cross the rivers. Before there were bridges to cross the sometimes angry rivers, ranchers had to ferry cattle across. Sometimes cattle fell off the barges, and once the barges were moving, they couldn’t stop for a single head of cattle. Ted Waitt’s ancestors rescued those cattle and sold them, eventually homesteading their own ranch. And that ranch is where Ted started Gateway Computers in 1985 with his friend Mike Hammond.  Michael Dell started Dell computers in 1984 and grew the company on the backs of a strong mail order business. He went from selling repair services and upgrades to selling full systems. He wasn’t the only one to build a company based on a mail and phone order business model in the 1980s and 1990s. Before the internet that was the most modern way to transact business.  Ted Waitt went to the University of Iowa in Iowa City a couple of years before Michael Dell went to the University of Texas. 
He started out in marketing and then spent a couple of years working for a reseller and repair store in Des Moines before he decided to start his own company. Gateway began life in 1985 as the Texas Instruments PC Network, or TIPC Network for short. They sold stuff for Texas Instruments computers like modems, printers, and other peripherals. The TI-99/4A had been released in 1979 and was discontinued a year before. It was a niche hobbyist market even by then, but the Texas Instruments Personal Computer had shipped in 1983 and came with an 8088 CPU. It was similar to an IBM PC and came with a DOS. But Texas Instruments wasn’t a clone maker and the machines weren’t fully Personal Computer compatible. Instead, there were differences.  They found some success and made more than $100,000 in just a few months, so brought in Ted’s brother Norm. Compaq, Dell, and a bunch of other companies were springing up to build computers. Anyone who had sold parts for an 8088 and used DOS on it knew how to build a computer. And after a few years of supplying parts, they had a good idea how to find inexpensive components to build their own computers. They could rescue parts and sell them to meatpacking plants as full-blown computers. They just needed some Intel chips and some boards, which were pretty common by then, and some RAM, which was dirt cheap due to a number of foreign companies dumping RAM into the US market. They built some computers and got up to $1 million in revenue in 1986. Then they became an IBM-compatible personal computer maker when they found the right mix of parts. It was close to what Texas Instruments sold, but came with a color monitor and two floppy disk drives, which were important in that era before all the computers came with spinning hard drives. Their first computer sold for just under $2,000, which made it half what a Texas Instruments computer cost. They found the same thing that Dell had found: the R&D and marketing overhead at big companies meant they could be more cost-competitive. They couldn’t call the computers a TIPC Network though. Sioux City, Iowa had become the Gateway to the Dakotas, and beyond, so they changed their name to Gateway 2000.  Gateway 2000 then released an 80286, which we lovingly called the 286, in 1988 and finally left the ranch to move into the city. They also put Waitt’s marketing classes to use and slapped a photo of the cows from the ranch in a magazine ad that said “Computers from Iowa?” and, in one of the better tactics for long-term loyalty, they gave cash bonuses to employees based on their profits. Within a year, they jumped to $12 million in sales. Then $70 million in 1989, and moved to South Dakota in 1990 to avoid paying state income tax. The cow turned out to be popular, so they kept Holstein cows in their ads and even added them to the box. Everyone knew what those Gateway boxes looked like. Like Dell, they hired great tech support who seemed to love their jobs at Gateway and would help with any problems people found. They brought in the adults in 1990. Executives from big firms. They had been the first to make color monitors standard and now, with the release of Windows, they became the first big computer seller to standardize on the platform.  They released a notebook computer in 1992. The HandBook was their first computer that didn’t do well. It could have been the timing, but in the midst of a recession, at a time when most households were getting computers, a low-cost computer sold well and sales hit $1 billion. 
Yet they had trouble scaling to ship hundreds of computers a day. They opened an office in Ireland and ramped up sales overseas. Then they went public in 1993, raising $150 million. The Waitts hung on to 85% of the company and used the capital raised in the IPO to branch into other areas to complete the Gateway offering: modems, networking equipment, printers, and more support representatives.  Sales in 1994 hit $2.7 billion a year. They added another support center a few hours down the Missouri River in Kansas City. They added a manufacturing plant in Malaysia. They bought Osborne Computer. They opened showrooms, and by 1996 Gateway spent tens of millions a year in advertising. The ads worked and they became a household name. They became a top ten company in computing with $5 billion in sales. Dell was the only direct personal computer supplier who was bigger.  They opened a new sales channel: the World Wide Web. Many still called after they looked up prices at first, but by 1997 they did hundreds of millions in sales on the web. By then, Ethernet had become the standard network protocol, so they introduced the E-Series, which came with built-in networking. They bought Advanced Logic Research to expand into servers. They launched a dialup provider called gateway.net.  By the late 1990s, the ocean of companies who sold personal computers was red. Anyone could head down to the local shop, buy some parts, and build their own personal computer. Dell, HP, Compaq, and others dropped their prices and Gateway was left needing a new approach. Three years before Apple opened their first store, Gateway launched Gateway Country, retail stores that sold the computers and the dialup service, and they went big fast, launching 58 stores in 26 states in a short period of time. With 2000 right around the corner, they also changed their name to Gateway, Inc. Price pressure continued to hammer away at them and they couldn’t find talent, so they moved to San Diego.  1999 proved a pivotal year for many in technology. The run-up to the dot com bubble meant new web properties popped up constantly. AOL had more capital than they could spend and invested heavily into Gateway to take over the ISP business, which had grown to over half a million subscribers. They threw in free Internet access with the computers, opened more channels into different sectors, and expanded the retail stores to over 200. Some thought Waitt needed to let go and let someone with more executive experience come in. So long-time AT&T exec Jeff Weitzen, who had joined the company in 1998, took over as CEO. By then Waitt was worth billions and it made sense that maybe he could go run a cattle ranch. His former partner Mike Hammond had a little business fixing up cars by then, so why not explore something new.  Waitt stayed on as chairman as Weitzen reorganized the company. But the prices of computers continued to fall. To keep up, Gateway released the Astro computer in 2000. This was an affordable, small desktop that had a built-in monitor, CPU, and speakers. It ran a 400 MHz Intel Celeron and had a CD-ROM and a 4.3 GB hard drive, with 64 megabytes of memory, a floppy, a modem, Windows 98 Second Edition, Norton Anti-Virus, USB ports, and the Microsoft Works Suite. All this came in at $799. Gateway had led the market with Windows and other firsts they jumped on board with. They had been aggressive. The first iMac had been released in 1998 and this seemed like they were following that with a cheaper computer. 
Gateway Country grew to over 400 stores. But the margins had gotten razor thin. That meant profits were down. Waitt came back to run the company; the US Securities and Exchange Commission filed charges for fraud against Weitzen, the former controller, and the former CFO, and that raged on for years. In that time, Gateway got into TVs, cameras, and MP3 players, and in 2004 acquired eMachines, a rapidly growing economy PC manufacturer. Their CEO, Wayne Inouye, then came in to run Gateway. He had been an executive at The Good Guys! and Best Buy before taking the helm of eMachines in 2001, helping them open sales channels in retail stores. But Gateway didn’t get as much of a foothold in retail. That laptop failure from the early 1990s stuck with Gateway. They never managed to ship a game-changing laptop. Then the market started to shift to laptops. Other companies leapt on that market but Gateway never seemed able to ship the right device. They instead branched into consumer electronics. The dot com bubble burst and they never recovered. The financial woes with the SEC hurt trust in the brand. The outsourcing hurt the trust in the brand. The acquisition of a budget manufacturer hurt the brand. Apple managed to open retail stores to great success, while preserving relationships with big box retailers. But Gateway lost that route to market when they opened their own stores. Then Acer acquired Gateway in 2007. They can now be found at Walmart, having been relaunched as a budget brand of Acer, a company who the big American firms once outsourced to, but who now stands on their own two feet as a maker of personal computers.
5/9/2022 • 18 minutes, 56 seconds
Episode Artwork

The WYSIWYG Web

4/29/2022 • 24 minutes, 37 seconds
Episode Artwork

Whistling Our Way To Windows XP

Microsoft had confusion in the Windows 2000 marketing and disappointment with Millennium Edition, which was built on a kernel that had run its course. It was time to phase out the older 95, 98, and Millennium code. So in 2001, Microsoft introduced Windows NT 5.1, known as Windows XP (for eXperience). XP came in a Home or Professional edition.  Microsoft built XP under the codename Whistler, with a new interface that was sleeker and took more advantage of the graphics processors of the day. Jim Allchin was the Vice President in charge of the software group by then and helped spearhead development. XP had even more security options, which were simplified in the Home edition. They did a lot of work to improve the compatibility between hardware and software and added the option for fast user switching so users didn’t have to log off completely and close all of their applications when someone else needed to use the computer. They also improved on the digital media experience and added new libraries to incorporate DirectX for various games.  The Professional edition also added options that were more business focused. This included the ability to join a domain and use Remote Desktop without the need of a third-party product to take control of the keyboard, video, and mouse of a remote computer. Users could use their XP Home Edition computer to log into work, if the network administrator could forward the necessary port. XP Professional also came with the ability to support multiple processors, send faxes, an encrypted file system, more granular control of files and other objects (including GPOs), roaming profiles (centrally managed through Active Directory using those GPOs), multiple language support, IntelliMirror (an oft-forgotten centralized management solution that included RIS and sysprep for mass deployments), and an option to do an Automated System Recovery, or ASR, restore of a computer. Professional also came with the ability to act as a web server, not that anyone should run one on a home operating system. XP Professional was also available in 64-bit given the right processor. XP Home Edition could be upgraded to from Windows 98, Windows 98 Second Edition, and Millennium, and XP Professional could be upgraded to from any operating system since Windows 98 was released, including NT 4 and Windows 2000 Professional. And users could upgrade from Home to Professional for an additional $100.   Microsoft also fixed a few features. One that had plagued users was that they had to gracefully unmount a drive before removing it; Microsoft got in front of this when they removed the warning that a drive was disconnected improperly and had the software take care of that preemptively. They removed some features users didn’t really use, like NetMeeting and Phone Dialer, and removed some of the themes options. The 3D Maze was also sadly removed. Other options just cleaned up the interface or merged technologies that had become similar: the Deluxe CD player and DVD player were removed in favor of just using Windows Media Player. And chatty network protocols that caused problems, like NetBEUI and AppleTalk, were removed from the defaults, as was the legacy Microsoft OS/2 subsystem. In general, Microsoft moved from two operating system code bases to one. Although with the introduction of Windows CE, they arguably had no net savings. However, to the consumer and enterprise buyer, it was a simpler licensing scheme. Those enterprise buyers were more and more important to Microsoft. 
Larger and larger fleets gave them buying power, and the line items with resellers showed it, with an explosion in the number of options for licensing packs and tiers. But feature-wise, Microsoft had spent the Windows NT and Windows 2000 era training thousands of engineers on how to manage large fleets of Windows machines as Microsoft Certified Systems Engineers (MCSE) and holders of other credentials. Deployments grew, and by the time XP was released, Microsoft had the lion’s share of the market for desktop operating systems and productivity apps. XP would only cement that lead and create a generation of systems administrators equipped to manage the platform, who never knew a way other than the Microsoft way. One step along the path to the MCSE was through servers. For the first couple of years, XP connected to Windows 2000 Servers. Windows Server 2003, which was built on the Windows NT 5.2 kernel, was then released in 2003. Here, we saw Active Directory cement a lead created in 2000 over servers from Novell and other vendors. Server 2003 became the de facto platform for centralized file, print, web, FTP, software, time, DHCP, DNS, event, messaging, and terminal services (or shared Remote Desktop services through Terminal Server). Server 2003 could also be purchased with Exchange 2003, and given the integration with Microsoft Outlook and a number of desktop services, Microsoft Exchange spread with it.  The groupware market in 2003 and the years that followed was dominated by Lotus Notes, Novell’s GroupWise, and Exchange. Microsoft was aggressive. They were aggressive on pricing. They released tools to migrate from Notes to Exchange the week before IBM’s conference. We saw some of the same tactics and some of the same faces that were involved in Microsoft’s Internet Explorer anti-trust suit from the 1990s. The competition to Exchange never recovered, and while Microsoft gained ground in the groupware space through the Exchange Server 4.0, 5.0, 5.5, 2000, 2003, 2007, 2010, 2013, and 2016 eras, by Exchange 2019 over half the mailboxes formerly hosted by on-premises Exchange servers had moved to the cloud, predominantly to Microsoft’s Office 365 cloud service. Some still used legacy Unix mail services like sendmail or those hosted by third party providers like GoDaddy with their domain or website - but many of those ran on Exchange as well. The only company to put up true competition in the space has been Google. Other companies had released tools to manage Windows devices en masse. Companies like Altiris sprang out of the needs of companies who did third party software testing to manage the state of Windows computers. Microsoft had a product called Systems Management Server but Altiris built a better product, so Microsoft built an even more robust solution called System Center Configuration Manager, or SCCM for short, and within a few years Altiris lost so much business they were acquired by Symantec. Other similar stories played out across other areas where each product competed with other vendors and sometimes market segments - and usually won. To a large degree this was because of the tight hold Windows had on the market. Microsoft had taken the desktop metaphor and seemed to own the entire stack by the end of the Windows XP era. However, the technology we used shipped a couple of years after the product management and product development teams started to build it. 
And by the end of the XP era, Bill Gates had been gone long enough, and so had many of the early stars who almost by pure will pushed products through development cycles. Microsoft continued to release new versions of the operating system, but XP became one of the biggest competitors to those later versions, more so than other companies' products. This reluctance to move to Vista and other technologies was the main reason Microsoft extended support for XP through to 2014, around 13 years after it was released. 
4/25/2022 • 11 minutes, 31 seconds
Episode Artwork

Windows NT 5 becomes Windows 2000

Microsoft Windows 2000 was the successor to Windows NT 4.0, which had been released in 1996. Windows 2000 didn’t have a code name (supposedly because Jim Allchin didn’t like codenames), although its service packs did; Service Pack 1 and Windows 2000 64-bit were codenamed "Asteroid" and "Janus," respectively. 2000 began as NT 5.0, but Microsoft announced the name change in 1998, in a signal of when customers might expect the OS.  Some of the enhancements were just to match the look and feel of the consumer Windows 98 counterpart. For example, the logo in the boot screens was cleaned up and they added new icons.  Some found Windows 2000 to be more reliable, others claimed it didn’t have enough new features. But what it might have lacked in features at a cursory glance, Windows 2000 made up for in stability, scalability, and reliability.  This time around, Microsoft had input from some of their larger partners. They released the operating system to partners in 1999, after releasing three release candidates or developer previews earlier that year. They needed to, if only so third parties could understand what items needed to be sold to customers. There were enough editions now that it wasn’t uncommon for resellers to have to call the licensing desk at a distributor (similar to a wholesaler for packaged goods) in order to figure out what line items the reseller needed to put on a bid, or estimate.  Reporters hailed it as the most stable product ever produced by Microsoft. It was also the most secure version. 2000 brought policies forward from NT as Group Policy and enhanced what could be controlled from a central system. The old flat domain concept for managing domains was enhanced to become what Microsoft called Active Directory, a modern directory service that located resources in a database and allowed for finely grained controls of those resources. Windows 2000 also introduced NTFS 3 and an Encrypting File System, built on top of layers of APIs, each with their own controls.  Still, Windows 98 was the most popular operating system in the world by then and it was harder to move people to 2000 than initially expected. Microsoft released Windows 98 Second Edition in 1999 and then Windows Millennium Edition, or Me, in 2000. Millennium was a flop and helped move more people into 2000, even though 2000 was marketed as a business or enterprise operating system.  Windows 2000 Professional was the workstation workhorse. Active Directory and other server services ran on Windows 2000 Server Edition. They also released Advanced Server and Datacenter Server for even more advanced environments, with Datacenter able to support up to 32 CPUs. Professional borrowed many features from both NT and 98 Second Edition, including the Outlook Express email client, expanded file system support, WebDAV support, Windows Media Player, WDM (Windows Driver Model), the Microsoft Management Console (MMC) for making it easier to manage those GPOs, support for new mass storage devices like FireWire, hibernation and passwords to wake up from hibernation, the System File Checker, new debugging options, better event logs, Windows Update (which would eventually give us “Patch Tuesday”), a new Windows Installer, Windows Management Instrumentation (WMI), Plug and Play hardware (installing new hardware in Windows NT was a bit more like doing so in Unix than in Windows 95), and all the transitions and animations of the Windows shell, like an Explorer integrated with Internet Explorer.  Some of these features were abused. 
We got Code Red, Nimda, and other malware that became high-profile attacks against vulnerable binaries. These were unprecedented in terms of how quickly a flaw in the code could get abused en masse. Hundreds of thousands of computers could be infected in a matter of days with a well-crafted exploit. Even some of the server services were exploited, such as IIS, the Internet Information Services server. Microsoft responded with security bulletins, but buffer overflows and other vulnerabilities allowed mass infections. So much so that the US and other governments got involved. This wasn’t made any easier by the fact that the source code for parts of 2000 was leaked on the Internet and had been used to help find new exploits. Yet Windows 2000 was still the most secure operating system Microsoft had put out. Imagine how many viruses and exploits would have appeared on all those computers if it hadn’t been. And within Microsoft, Windows 2000 was a critical step toward mass adoption of the far more stable, technically sophisticated Windows NT platform. It demonstrated that a technologically powerful Windows operating system could also have a user-friendly interface and multimedia capabilities.
4/17/2022 • 7 minutes, 53 seconds
Episode Artwork

The R Programming Language

R is the 18th letter of the Latin alphabet. It represents the rhotic consonant, or the r sound. It goes back to the Greek rho, the Phoenician resh before that, and the Egyptian rêš, which is the same word the Egyptians had for head, before that. R appears in about 7 and a half percent of the words in the English dictionary.  And R is probably the best language out there for programming around various statistical and machine learning tasks. We may use tools like TensorFlow imported into languages like Python to prototype, but R is incredibly performant for all the maths. And so it has become an essential piece of software for data scientists.  The R programming language was created in 1993 by two statisticians, Robert Gentleman and Ross Ihaka, at the University of Auckland, New Zealand. It has since been ported to practically every operating system and is available at r-project.org. Initially called "S," it was renamed "R" to avoid a trademark issue with a commercial software package that we’ll discuss in a bit. R was primarily written in C but used Fortran and, since then, even R itself.  And there have been statistical packages since the very first computers were used for math.  IBM in fact packaged up BMDP when they first started working on the idea at the UCLA Health Computing Facility. That was 1957. Then came SPSS out of the University of Chicago in 1968. And the same year, John Sall and others gave us SAS (Statistical Analysis System) out of North Carolina State University. And those evolved from those early days through into the 80s with the advent of object oriented everything, and thus got not only windowing interfaces but also extensibility, code sharing, and, as we moved into the 90s, acquisitions. BMDP was acquired by SPSS, who was then acquired by IBM, and the products were getting more expensive but not getting a ton of key updates for the same scientific and medical communities. And so we saw the upstarts in the 80s, Data Desk and JMP and others. Tools built for windowing operating systems and in object oriented languages. We got the ability to interactively manipulate data, zoom in, and spin three dimensional representations of data, and all kinds of pretty aspects. But they were not a programmer’s tool. S was begun in the seventies at Bell Labs and was supposed to be a statistical MATLAB, a language specifically designed for number crunching. And the statistical techniques were far beyond where SPSS and SAS had stopped. And with the breakup of Ma Bell, parts of Bell became Lucent, which sold S to Insightful Corporation, who released S-PLUS and would later get bought by TIBCO. Keep in mind, Bell was testing line quality and statistics and, going back to World War II, employed some of the top scientists in those fields, ones who would later create large chunks of the quality movement and implementations like Six Sigma. Once S went to a standalone software company, basically, it became less about the statistics and more about porting to different computers to make more money.  Private equity and portfolio conglomerates are, by nature, after improving the multiples on a line of business. But sometimes statisticians in various fields might feel left behind. And this is where R comes into the picture. R gained popularity among statisticians because it made it easier to write complicated statistical algorithms without learning an entire general-purpose programming language. Its popularity has grown significantly since then. R has been described as a cross between MATLAB and SPSS, but much faster. 
R was initially designed to be a language that could handle statistical analysis and other types of data mining, an offshoot of which we now call machine learning. R is also an open-source language and, as with a number of other languages, has plenty of packages available through a package repository - which they call CRAN (the Comprehensive R Archive Network). This allows R to be used in fields outside of statistics and data science, or to just get new methods to do math that doesn’t belong in the main language.  There are over 18,000 packages for R. One of the more popular is ggplot2, an open-source data visualization package. data.table is another that performs programmatic data manipulation operations. dplyr provides functions designed to enable data frame manipulation in an intuitive manner. tidyr helps create tidier data. Shiny generates interactive web apps. And there are plenty of packages to make R easier, faster, and more extensible. By 2015, more than 10 million people used R every month and it’s now the 13th most popular language in use. And the needs have expanded. We can drop R scripts into other programs and tools for processing. And some of the workloads are huge, which led to support for parallel computing, including through MPI (the Message Passing Interface).  R is one of the most popular languages used for statistical analysis, statistical graphics generation, and data science projects. There are other languages or tools for specific uses, but R has even started being used in those.  The latest version, R 4.1.2, was released on November 1, 2021. R development, as with most thriving open source solutions, is guided by a group of core developers supported by contributions from the broader community. It became popular because it provides all the essential features for data mining and graphics needed for academic research and industry applications, and because of its pluggable, robust, and versatile nature. And projects like TensorFlow, NumPy, and scikit-learn have evolved for other languages. And there are services from companies like Amazon that can host and process assets from both, whether using unstructured NoSQL databases or using Jupyter notebooks. A Jupyter Notebook is a JSON document, following a versioned schema, that contains an ordered list of input/output cells which can contain code, text (using Markdown), formulas, algorithms, plots, and even media like audio or video. Project Jupyter was a spin-off of IPython, but the goal was to create a language-agnostic tool where we could execute aspects in Ruby or Haskell or Python or even R. This gives us so many ways to get our data into the notebook, in batches or deep learning environments or whatever pipeline needs to be built based on an organization’s stack. Especially if the notebook has a frontend based on Amazon SageMaker Notebooks, Google's Colaboratory, or Microsoft's Azure Notebooks. Think about this: 25% of the world's languages lack a rhotic consonant. Sometimes it seems like we’ve got languages that do everything or that we’ve built products that do everything. But I bet no matter the industry or focus or sub-specialty, there’s still 25% more automation or investigation into our own data to be done. Because there always will be.
4/1/2022, 10 minutes, 50 seconds

The Earliest Days of Microsoft Windows NT

The first operating systems as we might think of them today (or at least anything beyond a basic task manager) shipped in the form of Multics in 1969. Some of the people who worked on that then helped create Unix at Bell Labs in 1971. Throughout the 1970s and 1980s, Unix flowed to education, research, and corporate environments through minicomputers, and many in those environments thought a flavor of BSD, or Berkeley Software Distribution, might become the operating system of choice on microcomputers. But the microcomputer movement had a whole other plan, if only in spite of the elder minicomputers. Apple DOS was created in 1978, in a time when most companies who made computers had to make their own DOS as well, if only so software developers could build disks capable of booting the machines. Microsoft created their Disk Operating System, or MS-DOS, in 1981. They proceeded to ship Windows 1 to sit on top of MS-DOS in 1985, which was built in Intel 8086 assembler and called operating system services via interrupts. That led to programmers poking directly at memory addresses and writing code that assumed a single-user operating system. Then came Windows 2 in 1987 and Windows 3 in 1990, before Microsoft released one of the most anticipated operating systems of all time in 1995 with Windows 95. 95 turned into 98, and then Millennium in 2000. But in the meantime, Microsoft began work on another generation of operating systems based on a fusion of ideas between work they were doing with IBM, work architects had done at Digital Equipment Corporation (DEC), and rethinking all of it with modern foundations of APIs and layers of security sitting atop a kernel. Microsoft worked on OS/2 with IBM from 1985 to 1989. This was to be the IBM-blessed successor for the personal computer. But IBM was losing control of the PC market with the rise of cloned IBM architectures. IBM was also big and corporate, and the small, fledgling Microsoft was able to move quicker. Really small companies that find success often don’t mesh well with really big companies that have layers of bureaucracy. The people Microsoft originally worked with were nimble and moved quickly. The ones presiding over the massive sales and go-to-market efforts and the explosion in engineering team size were back to the old IBM. OS/2 had APIs for most everything the computer could do. This meant that programmers weren’t just calling assembly any time they wanted and invading whatever memory addresses they wanted. They also wanted preemptive multitasking and threading. And a file system, since by then computers had internal hard drives. The Microsoft and IBM relationship fell apart and Microsoft decided to go their own way. Microsoft realized that DOS was old and that building on top of DOS was going to some day be a big, big problem. Windows 3 was closer, as was 95, so they continued on with that plan. But they also started something similar to what we’d call a fork of OS/2 today. So Gates went out to recruit the best in the industry. He hired Dave Cutler from Digital Equipment to take on the architecture of the new operating system. Cutler had worked on the VMS operating system and helped lead efforts for a next-generation operating system at DEC that they called MICA. And that moment began the march towards a new operating system called NT, which borrowed much of the best from VMS, Microsoft Windows, and OS/2 - and had little baggage. 
Microsoft was supposed to make version 3 of OS/2, but NT OS/2 3.0 would become just Windows NT when Microsoft stopped developing on OS/2. It took 12 years - because, well, they had a lot of customers after the wild success of first Windows 3 and then Windows 95 - but eventually Cutler and team’s NT would replace all other operating systems in the family with the release of Windows 2000. Cutler wanted to escape the confines of what was by then the second largest computing company in the world. Cutler had worked on VMS and RSX-11M before he got to Microsoft. There were constant turf battles and arguments about microkernels and system architecture, and meetings weren’t always conducive to actually shipping code. So Cutler went somewhere he could ship. At least, so long as they kept IBM at bay. Cutler brought some of the team from Digital with him and they got to work on that next generation of operating systems in 1988. They sat down to decide what they wanted to build, using the NT OS/2 work they had as a starting point. Microsoft had sold Xenix and the team knew about most every operating system on the market at the time. They wanted a multi-user environment like a Unix. They wanted programming APIs, especially for networking, but different than what BSD had. In fact, many of the paths and structures of networking commands in Windows still harken back to emulating those structures. The system would be slow on the 8086 processor, but ever since the days of Xerox PARC, everyone knew Moore’s Law was real and that the processors would double in speed every other year. Especially since Moore was still at Intel and could make his law remain true with the 286 and 386 chips in the pipeline. They also wanted the operating system to be portable, since IBM had selected the Intel CPU but there were plenty of other CPU architectures out there as well. The original name for NT was to be OS/2 3.0. But the IBM and Microsoft relationship fell apart and the two companies took their operating systems in different directions. OS/2 went the direction of Warp and IBM never recovered. NT went in a direction where some ideas came over from Windows 95 or 3.1, but mostly the team just added layers of APIs and focused on making NT a fully 32-bit version of Windows that could be ported to other platforms including ARM, PowerPC, and the DEC Alpha that Cutler had exposure to from his days at Digital. The name became Windows NT and NT began with version 3, as it was in fact the third installment of OS/2. The team began with Cutler and a few others, grew to eight, and by the time it finally shipped as NT 3.1 in 1993 there were a few hundred people working on the project. Where Windows 95 became the mass-marketed operating system, NT took lessons learned from the Unix, IBM mainframe, and VMS worlds and packed them into an operating system that could run on a corporate desktop computer, as microcomputers were called by then. The project cost $150 million, about the same as the first iPhone. It was a rough start. But that core team and those who followed did what Apple couldn’t in a time when a missing modern operating system nearly put Apple out of business. Cutler inspired, good managers drove teams forward, some bad managers left, other bad managers stayed, and in an almost agile development environment they managed to break through the conflicts and ship an operating system that didn’t actually seem like it was built by a committee. Bill Gates knew the market and was patient enough to let NT 3 mature. 
They took the parts of OS/2 like LAN Manager. They took parts of Unix like ping. But those were at the application level. The microkernel was the most important part. And that was a small core team, like it always is. The first version they shipped to the public was Windows NT 3.1. The salespeople often found it easiest to say that NT was the business-oriented operating system. Over time, the Windows NT series was slowly enlarged to become the company’s general-purpose OS product line for all PCs, and thus Microsoft abandoned the Windows 9x family, which might or might not have a lot to do with the poor reviews Millennium Edition had. Other aspects of the application layer the original team didn’t do much with included the GUI, which was much more similar to Windows 3.x. But based on great APIs they were able to move faster than most, especially in that era when Unix was in weird legal territory, changing hands from Bell to Novell, and BSD was also in dubious legal territory. The Linux kernel had been written in 1991 but wasn’t yet a desktop-class operating system. So the remaining choices most businesses considered were really Mac, which had serious operating system issues at the time and seemed to lack a vision since Steve Jobs had left the company, or Windows. Windows NT 3.5 was introduced in 1994, followed by 3.51 a year later. During those releases they shored up access control lists for files, functions, and services - services being similar in nearly every way to a daemon process in Unix. It sported a TCP/IP network stack, but also NetBIOS for locating computers to establish a share, and a file sharing stack in LAN Manager based on the Server Message Block, or SMB, protocol that Barry Feigenbaum wrote at IBM in 1983 to turn a DOS computer into a file server. Over the years, Microsoft and 3Com added additional functionality, and Microsoft later added LDAP (out of the University of Michigan) as a directory backend and Kerberos (out of MIT) to provide single sign-on services. 3.51 also brought a lot of user-mode components from Windows 95. That included the Windows 95 common control library, which included the rich edit control, and a number of tools for developers. NT could run DOS software; now they were getting it to run Windows 95 software without sacrificing the security of the operating system where possible. It kinda looked like a slightly more boring version of 95. And some of the features were a little harder to use, like configuring a SCSI driver to get a tape drive to work. But they got the ability to run Office 95 and it was the last version that ran the old Program Manager graphical interface. Cutler had been joined by Moshe Dunie, who led the management side of NT from 3.1 through NT 4 and became the VP of the Windows Operating System Division, so he also had responsibility for Windows 98 and 2000. For perspective, that operating system group grew to include 3,000 badged Microsoft employees and about half that number of contractors. Mark Lucovsky and Lou Perazzoli joined from Digital. Jim Allchin came in from Banyan. Windows NT 4.0 was released in 1996, with a GUI very similar to Windows 95. NT 4 became the workhorse of the field that emerged for large deployments of computers we now refer to as enterprise computing. It didn’t have all the animation-type bells and whistles of 95 but did perform about as well as any operating system could. It had the NT Explorer to browse files and a Start menu, for which many of us just clicked Run and typed cmd. 
It had a Windows Desktop Update and a task scheduler. They released a number of features that would take years for other vendors to catch up with. DCOM, or the Distributed Component Object Model, and Object Linking & Embedding (or OLE) were core aspects any developer had to learn. The Telephony API (or TAPI) allowed access to the modem. The Microsoft Transaction Server allowed developers to build distributed, transactional network applications. The Crypto API allowed developers to encrypt information in their applications. The Microsoft Message Queuing service allowed queued data transfer between services. They also built in DirectX support and already had OpenGL support. The Task Manager in NT 4 was like an awesome graphical version of the top command on Unix. And it came with Internet Explorer 2 built in. NT 4 would be followed by a series of service packs for 4 years before the next generation of operating system was ready. That was NT 5, more colloquially called Windows 2000. In those years NT became known as NT Workstation, the server became known as NT Server, and they built out Terminal Server Edition in collaboration with Citrix. And across 6 service packs, NT became the standard in enterprise computing. IBM released OS/2 Warp version 4.52 in 2001, but never had even a fraction of the sales Microsoft did. By contrast, NT 5.1 became Windows XP and NT 6 became Vista, while OS/2 was cancelled in 2005.
3/24/2022, 17 minutes, 55 seconds

Qualcomm: From Satellites to CDMA to Snapdragons

Qualcomm is the world's largest fabless semiconductor designer. The name Qualcomm is a mashup of Quality and Communications, and communications has been a hallmark of the company since its founding. They began in satellite communications and today most every smartphone has a Qualcomm chip. The ubiquity of communications in our devices and everyday lives has allowed them a $182 billion market cap as of the time of this writing.  Qualcomm began with far humbler beginnings. They emerged out of a company called Linkabit in 1985. Linkabit was started by Irwin Jacobs, Leonard Kleinrock, and Andrew Viterbi - all three former graduate students at MIT.  Viterbi moved to California to take a job with JPL in Pasadena, where he worked on satellites. He then went off to UCLA, where he developed what we now call the Viterbi algorithm, for encoding and decoding digital communications. Jacobs worked on a book called Principles of Communication Engineering after getting his doctorate at MIT. Jacobs then took a year of leave to work at JPL after he met Viterbi in the early 1960s and the two hit it off. By 1966, Jacobs was a professor at the University of California, San Diego. Kleinrock was at UCLA by then and the three realized they had too many consulting efforts between them, but if they consolidated the requests they could pool their resources. Eventually Jacobs and Viterbi left and Kleinrock got busy working on the first ARPANET node when it was installed at UCLA. Jerry Heller, Andrew Cohen, Klein Gilhousen, and James Dunn eventually moved into the area to work at Linkabit and by the 1970s Jacobs was back to help design telecommunications for satellites. They’d been working to refine the theories from Claude Shannon’s time at MIT and Bell Labs and were some of the top names in the industry on the work. And the space race needed a lot of this type of work. They did their work on Scientific Data Systems computers, in an era before that company was acquired by Xerox. Much as Claude Shannon got started thinking of data loss as it pertains to information theory while trying to send telegraphs over barbed wire, they refined that work thinking about sending images from Mars to Earth.  Others from MIT worked on other space projects as a part of missions. Many of those early employees were Viterbi’s PhD students, and they were joined by Joseph Odenwalder, who took Viterbi’s decoding work and combined it with a previous dissertation out of MIT when he joined Linkabit. That got used in the Voyager space probes and put Linkabit on the map. They were hiring some of the top talent in digital communications and were able to promote not only being able to work with some of the top minds in the industry but also the fact that they were in beautiful San Diego, which appealed to many in the Boston or MIT communities during harsh winters. As solid state electronics got cheaper and the number of transistors more densely packed into those wafers, they were able to exploit the ability to make hardware and software for military applications by packing digital signal processing that had previously taken a Sigma from SDS into smaller and smaller form factors, like the Linkabit Microprocessor, which got Viterbi’s algorithm for decoding data onto a breadboard and then a chip.  The work continued with defense contractors and suppliers. They built modulation and demodulation for UHF signals for military communications. That evolved into a Command Post Modem/Processor they sold, or CPM/P for short. 
They made modems for the military in the 1970s, some of which remained in production until the 1990s. And as they turned the corner into the 1980s, they had more than $10 million in revenue.  The UC San Diego program grew in those years, and the Linkabit founders had more and more local talent to choose from. Linkabit developed tools to facilitate encoded communications over commercial satellites as well. They partnered with companies like IBM and developed smaller business units they were able to sell off. They also developed a tool they called VideoCipher to encode video, which HBO and others used to do what we later called scrambling on satellite signals. As we rounded the corner into the 1990s, though, they turned their attention to cellular services with TDMA (Time-Division Multiple Access), an early alternative to CDMA. Along the way, Linkabit got acquired by a company called M/A-COM in 1980 for $25 million. The founders liked that the acquirer was run by a fellow MIT PhD, and Linkabit stayed separate but grew quickly with the products they were introducing. As with most acquisitions, the culture changed and by 1985 the founders were gone. The VideoCipher and other units were sold off, spun off, or people just left and started new companies. Information theory was decades old at this point, plenty of academic papers had been published, and everyone who understood the industry knew that digital telecommunications was about to explode; a perfect storm for defections.
Qualcomm
Over the course of the next few years over two dozen companies were born as the alumni left, and by 2003, 76 companies had been founded by Linkabit alumni, including four that went public. One of those companies, begun in 1985 by Linkabit founders Irwin Jacobs and Andrew Viterbi, was Qualcomm, also based in San Diego. The founders had put information theory into practice at Linkabit and seen that the managers who were great at finance just weren’t inspiring to scientists.  Qualcomm began with consulting and research, but this time looked for products to take to market. They merged with a company called Omninet and the two released the OmniTRACS satellite communication system for trucking and logistics companies. They landed Schneider National and a few other large customers and grew to over 600 employees in those first five years. It remained a Qualcomm subsidiary until recently. Even with tens of millions in revenue, they operated at a loss while researching what they knew would be the next big thing.  Code-Division Multiple Access, or CDMA, is a technology that spreads each signal across a shared band of the radio spectrum using unique codes, so many users can share the same frequencies without a lot of interference. The original research began all the way back in the 1930s, when Dmitry Ageyev in the Soviet Union researched the theory of code division of signals at the Leningrad Electrotechnical Institute of Communications. That work was furthered during World War II by German researchers like Karl Küpfmüller and Americans like Claude Shannon, who focused more on the information theory of communication channels.  People like Lee Yuk-wing then took the cybernetics work from pioneers like Norbert Wiener and helped connect it with the work of others like Qualcomm’s Jacobs, a student of Yuk-wing’s when he was a professor at MIT. They were already working on jam-resistant spread-spectrum communications in the early 1950s at MIT’s Lincoln Lab. 
Another Russian, named Leonid Kupriyanovich, put the concept of CDMA into practice in the later 1950s so the Soviets could track people using a service they called Altai. That made it perfect for tracking trucks, and within a few years, in 1965, it was released as a pre-cellular radiotelephone network that got bridged to standard phone lines. The Linkabit and then Qualcomm engineers had worked closely with satellite engineers at JPL, then Hughes, and other defense and then commercial contractors. They’d come in contact with that work and built their own intellectual property for decades. Bell was working on mobile, or cellular, technologies. Ameritech Mobile Communications launched the first 1G network, based on the Advanced Mobile Phone System (AMPS), in 1983, and Vodafone launched their first service in the UK in 1984. Qualcomm filed their first patent for CDMA the next year.  That patent is one of the most cited documents in all of technology. Qualcomm worked closely with the Federal Communications Commission (FCC) in the US and with industry consortiums, such as the CTIA, or Cellular Telephone Industries Association. Meanwhile Ericsson promoted the TDMA standard, claiming it was the more established standard; however, Qualcomm worked on additional patents and got to the point that they licensed their technology to early cell phone providers like Ameritech, who was one of the first to switch from the TDMA standard Ericsson promoted to CDMA. Other carriers switched to CDMA as well, which gave them data to prove their technology worked. The OmniTRACS service helped with revenue, but they needed more. So they filed for an initial public offering in 1991 and raised over $500 million in funding between then and 1995, when they sold another round of shares. By then, they had done the work to get CDMA encoding on a chip and it was time to go to the mass market. They made double what they raised back in just the first two years, reaching over $800 million in revenue in 1996.
Qualcomm and Cell Phones
One of the reasons Qualcomm was able to raise so much money in two substantial rounds of public funding is that the test demonstrations were going so well. They deployed CDMA in San Diego, New York, Hong Kong, and Los Angeles, and within just a few years had over a dozen carriers running substantial tests. The CTIA supported CDMA as a standard in 1993 and by 1995 they went from tests to commercial networks.  The standard grew in adoption from there. South Korea standardized on CDMA between 1993 and 1996. The CDMA standard was embraced by PrimeCo in 1995, who used the 1900 MHz PCS band. This was a joint venture between a number of vendors, including two of the former regional spin-offs from the breakup of AT&T, and it represented interests from Cox Communications and Sprint, and turned out to be a large undertaking. It was also the largest cellular launch, with services going live in 19 cities, and the first phones were from a joint venture between Qualcomm and Sony. Most of PrimeCo’s assets were later merged with AirTouch Cellular and Bell Atlantic Mobile to form what we now know as Verizon Wireless.  Along the way, there were a few barriers to mass proliferation of the Qualcomm CDMA standards. One is that they made phones. The Qualcomm Q cost them a lot to manufacture and it was a market with a lot of competitors who had cheaper manufacturing ecosystems. So Qualcomm sold the manufacturing business to Kyocera, who continued to license Qualcomm chips. 
Now they could shift all of their focus to encoding bits of data to be carried over multiple radio channels and do their part in paving the way for 2G and 3G networks with the chips that went into most phones of the era.  Qualcomm couldn’t have built out a mass manufacturing ecosystem to supply the world with every phone needed in the 2G and 3G era. Nor could they make all the chips that went in those phones. The mid and late 1990s saw them outsource and then just license their patents and know-how to other companies. That led to a quarter of a billion 3G subscribers across over a hundred carriers in dozens of countries. They got in front of what came after CDMA and worked on multiple other standards, including OFDMA, or Orthogonal Frequency-Division Multiple Access. For those they developed the Qualcomm Flarion Flash-OFDM and 3GPP 5G NR, or New Radio. And of course a boatload of other innovative technologies and chips, thus paving the way for Qualcomm to be instrumental in 5G and beyond.  This was really made possible by hyper-specialization. Many of the same people who developed the encoding technology for the Voyager space probes decades prior helped pave the way for the mobile revolution. They ventured into manufacturing but, as with many of the designers of technology and chips, chose to license the technology in massive cross-licensing deals. These deals are so big that Apple recently sued Qualcomm for a billion dollars in missed rebates. But there were changes happening in the technology industry that would shake up those licensing deals.  Broadcom was growing into a behemoth. Many of their designs went from stand-alone chips to being a small part of an SoC, or system on a chip. Suddenly, licensing the ARM architecture gave Qualcomm the ability to make full SoCs.  Snapdragon has been the moniker of the current line of SoCs since 2007. Qualcomm has an ARM architectural license and uses the ARM instruction set to create their own CPUs, with recent incarnations known as Krait and Kryo. They also create their own graphics processors (GPUs) and digital signal processors (DSPs), known as Adreno and Hexagon. They recently acquired Arteris' technology and engineering group and use Arteris' network-on-chip (NoC) technology. Snapdragon chips can be found in Samsung Galaxy, Vivo, Asus, and Xiaomi phones. Apple designs their own chips that are based on the ARM architecture, so in some ways they compete with the Snapdragon, but they still use Qualcomm modems like every other SoC vendor. Qualcomm also bought a patent portfolio from HP, including the Palm patents and others, so who knows what we’ll find in the next chips - maybe a chip in a stylus.  Their slogan is "enabling the wireless industry," and they’ve certainly done that. From satellite communications that required a computer the size of a few refrigerators, to battlefield communications, to shipping trucks with tracking systems, to cell towers, and now the full processor on a cell phone. They’ve been with us since the beginning of the mobile era, and one has to wonder if the next few generations of mobile technology will involve satellites - and if so, whether Qualcomm will end up right back where they began: encoding bits of information theory into silicon.
3/17/2022, 28 minutes, 55 seconds

The Short But Sweet History Of The Go Programming Language

The Go Programming Language
Go is an open-source programming language with influences from Limbo, APL, Modula, Oberon, Pascal, Alef, Erlang, and, most importantly, C. While relatively young compared to many languages, there are over 365,000 repositories of Go projects on GitHub alone. There are a few reasons it gained popularity so quickly: it’s fast and efficient in the right hands, simple to pick up, doesn’t have some of the baggage of more mature languages, and then there’s the name Ken Thompson. The seamless way we can make calls from Go into C, and the fact that Ken Thompson was one of the parties responsible for C, makes it seem in part like a modern web-enabled language that can stretch between the tasks C is still used for all the way to playing fart sounds in an app. And it didn’t hurt that co-author Rob Pike had helped write books, co-created UTF-8, and was part of the distributed operating system Plan 9 team at Bell Labs, where he had worked on the Limbo programming language.  And Robert Griesemer was another co-author. He’d begun his career studying under Niklaus Wirth, the creator of Pascal, Modula, and Oberon. So it’s no surprise that he’d go on to write compilers and design languages. Before Go, he’d worked on the V8 JavaScript engine at Google and a compiler for the Java HotSpot Virtual Machine. So our intrepid heroes assembled (pun intended) at Google in 2009. But why? Friends don’t let friends write in C. Thompson had done something amazing for the world with C. But that was going on 50 years ago. And others had picked up the mantle with C++. But there were shortcomings the team wanted to address. And so Go has the ability to concatenate string variables without using a preprocessor, has many similarities to languages like BASIC from the Limbo influences, but the most impressive feature about this programming language is its support for concurrent execution. And probably the best garbage collection facility I’ve ever seen.  The first version of the language wasn't released to the public right away, and wouldn’t be for a few years. The initial compiler was written in C, but over time they got to where it could be self-hosted, which is to say that Go is compiled in Go.  Go is a compiled language that can run on a command line, in a browser, on the server, or even be used to compile itself. Go compiles fast and has no global variables to clutter memory. This simplicity makes it easy to read through Go code line by line without consulting any parsing tools or syntax charts. Let’s look at a quick Hello World:
// A basic Go program that demonstrates "Hello World!"
package main
import "fmt"
func main() {
    fmt.Println("Hello World!")
}
The output would be a simple Hello World! Fairly straightforward, but the power gets into more of the scripting structures - especially given that a microservice is just a lot of little functional scripts. Go is not a functional programming language per se and does not include support for traditional class-based object orientation. The toolchain consists of a parser and a compiler that translate all source code into native machine code rather than bytecode for a virtual machine. Consequently, Go programs tend to compile quickly and run very efficiently because they are mainly independent of the runtime environment and can execute directly on the hardware without being interpreted by some sort of virtual machine first. Additionally, there is no need for a separate interpreter during execution since everything runs natively. The libraries and sources built using the Go programming language provide developers with a straightforward, safe, and extensible system to build on. We have things like Go Kit, GORM, cli, Vegeta, fuzzy, Authboss, Image, Time, gg, and mgo. These can basically provide pre-built functions and APIs to hook into most any type of service or give a number of things for free. Go was well designed from the outset and while it’s evolved over the years, it hasn’t changed as much as many other languages, with the latest release being Go 1.17. 1.1 came about a year after the initial stable release to increase how much memory could be used on 64-bit chips by about 10-fold, add detection for race conditions, and make int and uint 64 bits on 64-bit platforms. Oh, and it fixed a couple of issues in the compiler. 1.2 also came in 2013 and tweaked how slicing of arrays worked in a really elegant way (almost Ruby-like), allowed developers to call the runtime scheduler for non-inline calls, added a thread limit of 10,000 threads, like the ulimit a bash shell would have, and doubled the minimum size of a goroutine stack.  Then the changes got smaller. This happens as every language gets more popular. The more people use it, the more havoc the developers cause when they make breaking changes. Bigger changes included contiguous goroutine stacks in 1.3, the addition of internal packages in 1.4, and a redesigned garbage collector in 1.5, when Go was moved away from C and implemented solely in Go and assembler. And 17 releases later, it’s more popular than ever. While C remains the most popular language today, Go is hovering in the top 10. Imagine, one day saying let’s build a better language for concurrent programming. And then voilà: hundreds of thousands of people are using it. 
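Since concurrency gets called out above as Go's most impressive feature, here is a minimal sketch of goroutines and channels to show what that looks like in practice; the worker loop and names below are illustrative assumptions, not something from the episode:
// A minimal sketch of Go's concurrency primitives: goroutines and channels.
package main

import (
    "fmt"
    "sync"
)

func main() {
    results := make(chan string, 3) // buffered channel to collect worker output
    var wg sync.WaitGroup

    // Launch three goroutines that run concurrently.
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            results <- fmt.Sprintf("worker %d done", id)
        }(i)
    }

    // Close the channel once every goroutine has finished.
    go func() {
        wg.Wait()
        close(results)
    }()

    // Range over the channel until it is closed.
    for msg := range results {
        fmt.Println(msg)
    }
}
Each go statement starts a lightweight goroutine on the runtime scheduler, and the channel is how they hand results back without sharing memory directly.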
3/13/2022, 9 minutes, 33 seconds

awk && Regular Expressions For Finding Text

Programming was once all about math. And life was good. Then came strings, or those icky non-numbery things. Then we had to process those strings. And much of that is looking for patterns that wouldn’t be needed with integers, or numbers. For example, a space in a string of text. Let’s say we want to print hello world to the screen in bash. That would be the echo command, followed by “Hello World!” Now let’s say we ran that without the quotes: the shell would treat Hello and World! as separate arguments rather than one string, given that the interpreter sees the space and splits the input before deciding what to pass to the command, or looks for the next operator or verb according to which command is being used. Unix was started in 1969 at Bell Labs. Part of that work was the Thompson shell, the first Unix shell, which shipped in 1971. And C was written in 1972. These make up the ancestral underpinnings of the modern Linux, BSD, Android, Chrome, iPhone, and Mac operating systems. A lot of the work the team at Bell Labs was doing was shifting from pure statistical and mathematical operations, used to connect phones and do R&D faster, to more general computing applications. That meant going from math to those annoying stringy things. People called files funny things. There was text in those files. And so text manipulation became a thing. Lee McMahon developed sed in 1974, which was great for finding patterns and doing basic substitutions. Another team at Bell Labs that included Canadian programmer Alfred Aho, Peter Weinberger, and Brian Kernighan had more advanced needs. Take their last name initials and we get awk. Awk is a programming language they developed in 1977 for data processing, or more specifically for text manipulation. Marc Rochkind had been working on a version management tool for code at Bell, and that involved some text manipulation as well as providing a good starting point for awk.  It’s meant to be concise and, given some input, produce the desired output. A nice, short, efficient scripting language to help people who didn’t need to go out and learn C to do some basic tasks. AWK is a programming language with its own interpreter, so there's no need to compile AWK scripts to run them as executable programs.  Sed and awk are both written to be used as one-line programs, or more if needed. But building in implicit loops and implicit variables made it simple to build short but powerful programs around regular expressions. Think of an awk program as a pair: a pattern followed by an action to take in curly brackets. It can be dangerous to call if the pattern is too wide open, especially when piping information. For example, run ls -al at the root of a volume, pipe that to awk to print $1 or some other position, then pipe that into xargs to rm, and a systems administrator could have a really rough day. Those $1, $2, and so on represent the positions of words on each line - so they could be directories.  Think about this, though. In a world before relational databases, when we were looking to query the 3rd column in a file with information separated by some delimiter, piping those positions represented a simple way to effectively join tables of information into a text file or screen output. Or to find files on a computer that match a pattern for whatever reason.  Awk began powerful. Over time, improvements have enabled it to be used in increasingly complicated scenarios. Especially when it comes to pattern matching with regular expressions. 
Various coding styles for input and output have been added as well, which can be changed depending on the need at hand.  Awk is also important because it influenced other languages. It became part of IEEE Standard 1003.1 and is thus part of the POSIX standard. And after a few years, Larry Wall came up with some improvements, and along came Perl. But the awk syntax has always been one of the most succinct and usable ways to work with regular expressions. Part of that is the wildcard, piping, and file redirection techniques borrowed from the original shells. The AWK creators wrote a book called The AWK Programming Language for Addison-Wesley in 1988. Aho would go on to develop influential algorithms, write compilers, and write books (some of which were about compilers). Weinberger continued to do work at Bell before becoming the Chief Technology Officer of the hedge fund Renaissance Technologies with former code breaker and mathematician James Simons and Robert Mercer. His face led to much love from his coworkers at Bell during the advent of digital photography, and hopefully some day we’ll see it on the Google Search page, given he now works there.  Brian Kernighan was a contributor to the early Multics and then Unix work, as well as C. In fact, an important C implementation, K&R C, stands for Kernighan and Ritchie C. He coauthored The C Programming Language and has written a number of other books, most recently on the Go programming language. He also wrote a number of influential algorithms, as well as some other programming languages, including AMPL. His 1978 description of how to manage memory when working with those pesky strings we discussed earlier went on to give us the Hello World example we use for pretty much all introductions to programming languages today. He worked on ARPA projects at Stanford, helped with emacs, and now teaches computer science at Princeton, where he can help to shape the minds of future generations of programming languages and their creators. 
3/4/2022, 8 minutes, 40 seconds

Banyan Vines and the Emerging Local Area Network

One of my first jobs out of college was ripping Banyan VINES out of a company and replacing it with LAN Manager. Banyan VINES was a network operating system for Unix systems. It came along in 1984. This was a time when minicomputers running Unix were running at most every university and when Unix offered far more features than the alternatives. Sharing files was as old as the Internet. Telnet was created in 1969. FTP came along in 1971. SMB in 1983. Networking computers together had evolved from just the ARPANET to local protocols like ALOHAnet, which inspired Bob Metcalfe to start work on the PARC Universal Packet protocol with David Boggs, which evolved into the Xerox Network Systems, or XNS, suite of networking protocols that were developed to network the Xerox Alto. Along the way the two of them co-invented Ethernet. But there were developments happening in various locations, in silos. For example, TCP was more of an ARPANET and then NSFNET project, so it wasn’t yet used for computers on their own local networks to communicate. Data General was founded in 1968 when Edson de Castro, the project manager for the PDP-8 at Digital Equipment Corporation, grew frustrated that the PDP wasn’t evolving fast enough. He, Henry Burkhardt, and Richard Sogge of Digital would be joined by Herbert Richman, who did sales for Fairchild Semiconductor. They were proud of the PDP-8. It was a beautiful machine. But they wanted to go even further. And they didn’t feel like they could do so at Digital. Within a year, they shipped their next-generation machine, which they called the Nova. They released more computers, but then came the explosion of computers that was the personal computing market. Microcomputers showed up in offices around the world and on multiple desks. And it didn’t take long before people started wondering if it wouldn’t be faster to run a cable between computers than it was to save a file to a floppy and get on an elevator. By the 1980s, Data General had been writing software for customers, mostly for the rising tide of UNIX System V implementations. But they were mostly just giving customers a TCP/IP stack or an application that could open a socket over an X.25 network - X.25 later being replaced with Frame Relay networks run by phone companies, with X.25 streamed over TCP/IP for legacy support. Some of the people from those projects at Data General saw an opportunity to build a company that focused on a common need: moving files back and forth between the microcomputers that were also being connected to these networks. David Mahoney was a manager at Data General who saw what customers were asking for. And he saw that an increasing number of those microcomputers needed a few common services to connect to. So he left to form Banyan Systems in 1983, bringing Anand Jagannathan and Larry Floryan with him. They built Banyan VINES (Virtual Integrated NEtwork Service) in 1984, releasing version 1. Their client software could run on DOS and connect to X.25, Token Ring (which IBM introduced in 1984), or the Ethernet networks Bob Metcalfe, from Xerox and then 3Com, was a proponent of. After all, much of their work resembled the Xerox Network Systems protocols, which Metcalfe had helped develop. They used a 32-bit address. They developed an Address Resolution Protocol (or ARP) and Routing Table Protocol (RTP) that used tables on a server. And they created a file services application, a print services application, and a directory service they called StreetTalk. 
To help, they brought in Jim Allchin, who eventually did much of the heavy lifting. It was similar enough to TCP/IP, but different. Yet as TCP/IP became the standard, they added that at a cost. The whole thing came in at $17,000 and ran on less bandwidth than other services, and so they won a few contracts with the US State Department, US Marine Corps, and other government agencies. Many embassies used 300 baud phone lines with older modems, and the new VINES service allowed them to do file sharing, print sharing, and even instant messaging throughout the late 80s and early 90s. The Marine Corps used it during the Gulf War and, in an early form of a buying tornado, they went public in 1992, raising $28 million through NASDAQ. They grew to 410 employees and peaked at around $75 million in sales, spread across 7,000 customers. They’d grown through word of mouth, and other companies with strong marketing and sales arms were waiting in the wings. Novell was founded in 1983 in Utah and developed the IPX network protocol. NetWare would eventually become one of the most dominant network operating systems for Windows 3 and then Windows 95 computers. Yet, with incumbents like Banyan VINES and Novell NetWare, this is another one of those times when Microsoft saw an opening for something better and just willed it into existence. And the story is similar to that of dozens of other companies including Novell, Lotus, VisiCalc, Netscape, Digital Research, and the list goes on and on and on. This kept happening for a number of reasons. The field of computing was full of former academics, many of whom weren’t aggressive in business. Microsoft ended up owning the operating system and so had selling power when it came to cornering adjacent markets, because they could provide the cleanest possible user experience. People seemed to underestimate Microsoft until it was too late. Inertia. Oh, and Microsoft could outspend on top talent and offer them the biggest impact for their work. Whatever the motivators, Microsoft won in nearly every nook and cranny in the IT field that they pursued for decades. The damaging part for Banyan was when Microsoft teamed up with IBM to ship LAN Manager, which ultimately shipped under the name of each company. Microsoft ended up recruiting Jim Allchin away and, with network interface cards falling below $1,000, it became clear that the local area network was really just in its infancy. He inherited LAN Manager and then NT from Dave Cutler, and the next thing we knew, Windows NT Server was born, complete with file services, print services, and a domain, which wasn’t a fully qualified domain name until the release of Active Directory. Microsoft added Winsock in 1993 and released their own protocols. They supported protocols like IPX/SPX and DECnet but slowly moved customers to their own protocols. Banyan released the last version of Banyan VINES, 7.0, in 1997. StreetTalk eventually became an NT to LDAP bridge before being cancelled in the end. The dot com bubble was firmly here, though, so all was not lost. They changed their name in 1999 to ePresence, shifting their focus to identity management and security and officially pulling out of the VINES market. But the dot com bubble burst, so they were acquired in 2003 by Unisys. There were other companies in different networking niches along the way. Phil Karn wrote KA9Q NOS to connect CP/M and then DOS to TCP/IP in 1985. 
He wrote it on a Xerox 820, but by then Xerox was putting Zilog chips in computers and running CP/M, seemingly with little of the flair of the Alto. But with KA9Q NOS any of the personal computers on the market could get on the Internet, and that software helped host many a commercial dialup connection and would go on to be used for years in small embedded devices that needed IP connectivity. Those turned out to be markets overtaken by Banyan, who was overtaken by Novell, who was overtaken by Microsoft when they added Winsock. There are a few things to take away from this journey. The first is that when IBM and Microsoft team up to develop a competing product, it’s time to pivot while there’s plenty of money left in the bank. The second is that the era of closed systems was short-lived once vendors increasingly wanted to embrace open standards. Open standards like TCP/IP. We also want to keep our most talented team members in place. Jim Allchin was responsible for those initial Windows Server implementations. Then SQL Server. He was the kind of person who’s a game changer on a team. We also don’t want to pivot to the new hotness just because it’s the new hotness. Customers pay vendors to solve problems. Putting an e in front of the name of a company seemed really cool in 1998. But surveying customers and thinking more deeply about the problems they face - that’s where magic can happen. Provided we have the right talent to make it happen.
2/27/2022, 13 minutes, 1 second

The Nature and Causes of the Cold War

Our last episode was on Project MAC, a Cold War-era project sponsored by ARPA. That led to many questions, like what led to the Cold War and just what was the Cold War. We'll dig into that today. The Cold War was a period between 1946, in the days after World War II, and 1991, when the United States and its western allies and the Soviet Union were engaged in a technical time of peace that was actually an aggressive time of arms buildup and proxy wars. Technology often moves quickly when nations or empires are at war. In many ways, the Cold War gave us the very thought of interactive computing and networking, so it is responsible for the acceleration towards our modern digital lives. And while I’ve never seen it referenced as such, this was more of a continuation of wars between the former British Empire and the imperialistic Russian Empire. These make up two of the three largest empires the world has ever seen and a rare pair of empires that were active at the same time.  And the third, well, we’ll get to the Mongols in this story as well. These were larger than the Greeks, the Romans, the Persians, or any of the Chinese dynasties. In fact, the British Empire that reached its peak in 1920 was 7 times larger than the land controlled by the Romans, clocking in at 13.7 million square miles. The Russian Empire was 8.8 million square miles. Combined, the two held nearly half the world. And their legacies live on in trade empires, in some cases run by the same families that helped fund the previous expansions.  But the Russians and British were on a collision course going back to a time when their roots were not as different as one might think. They were both known to the Romans. But yet they both became feudal powers with lineages of rulers going back to Vikings. We know the Romans battled the Celts, but they also knew of a place that Ptolemy called Sarmatia Europea in around 150 AD, where a man named Rurik would settle far later. He was a Varangian prince, which is the name the Romans gave to Vikings from the area we now call Sweden. The 9th to 11th centuries saw a number of these warrior chiefs flow down rivers throughout the Baltics and modern Russia in search of riches from the dwindling Roman vestiges of empire. Some returned home to Sweden; others conquered and settled. They rowed down the rivers: the Volga, the Volkhov, the Dvina, and the networks of rivers that flow between one another, all the way down the Dnieper river, through the Slavic tribes Ptolemy described, which by then had developed into city-states such as Kiev, past the Romanians and Bulgars, and to the second Rome, or Constantinople.  The Viking ships rowed down these rivers. They pillaged, conquered, and sometimes settled. The term for rowers was Rus. Some Viking chiefs set up their own city-states in and around the lands. Some did so when their lands back home were taken while they were off on long campaigns. Charlemagne conquered modern-day France and much of Germany, from the Atlantic all the way down into the Italian peninsula, north into Jutland, and east to the border with the Slavic tribes. He weakened many, upsetting the balance of power in the area. Or perhaps there was never a balance of power.  Empires such as the Scythians and Sarmatians and various Turkic or Iranian powers had come and gone, and each that crossed the vast and harsh lands in their wake found only what Homer said of the area all the way back in the 8th century BCE: that the land was deprived of sunshine. 
The Romans never pushed up so far into the interior of the steppes, as they were busy with more fertile farming grounds. But as the Roman Empire fell and the Byzantines flourished, the Vikings traded with them and even took their turn trying to loot Constantinople. And Frankish Paris. And again, settled in the Slavic lands, marrying into cultures and DNA.
The Rus
Rome retreated from lands as her generals were defeated. The Merovingian dynasty rose in the 5th century with the defeat of Syagrius, the last Roman general in Gaul, and lasted until a family of advisors slowly took control of running the country, transitioning to the Carolingian Empire, of which Charlemagne, the Holy Roman Emperor, as he was crowned, was the most famous. He conquered and grew the empire.  Charlemagne knew the empire had outgrown what one person could rule with the technology of the era, so it was split into three, which his son passed to his grandsons. And so the Carolingian empire had made the Eastern Slavs into tributaries of the Franks. There were hostilities, but by the Treaty of Mersen in 870 the split of the empire generally looked like the borders of northern Italy, France, and Germany - although Germany also included Austria but not yet Bohemia. It split and re-merged and smaller boundary changes happened, but that left the Slavs aware of these larger empires. The Slavic peoples grew and mixed with people from the steppes and Vikings. The Viking chiefs were always looking for new extensions to their trade networks. Trade was good. Looting was good. Looting and getting trade concessions to stop looting those already looted was better. The networks grew. One of those Vikings was Rurik. Possibly the Danish Rorik, a well-documented ally who tended to play all sides of the Carolingians and a well-respected raider and military mind.  Rurik was brought in as the first Viking, or rower, or Rus, ruler of the important trade city that would be known as New City, or Novgorod. Humans had settled around Kiev since the Stone Age, and it was then held by the Polans after a prince named Kyi took over, before Rurik’s successor Oleg took Smolensk and Lyubech. Oleg extended the land of Rus down the trading routes and conquered Kiev. Now they had a larger capital and were the Kievan Rus.  Rurik’s son Igor took over after Oleg and centralized power in Kiev. He took tribute from Constantinople after he attacked, plundered Arab lands off the Caspian Sea, and was killed overtaxing vassal states in his territory. His son Sviatoslav the Brave then conquered the Alans and through other raiding helped cause the collapse of the Khazar and Bulgarian empires. They expanded throughout the Volga River valley, then to the Balkans, and up the Pontic Steppe, and quickly became the largest empire in Europe of the day. His son Vladimir the Great expanded again, with the empire extending from the Baltics to Belarus, and converted to Christianity, thus Christianizing the lands he ruled.  He began marrying and integrating into the Christian monarchies, which his son continued. Yaroslav the Wise married the daughter of the King of Sweden, who gave him the area around modern-day Leningrad. He then captured Estonia in 1030 and, as with others in the Rurikid dynasty as they were now known, made treaties with others and then pillaged more Byzantine treasures. 
He married one daughter to the King of Norway, another to the King of Hungary, another to the King of the Franks, and another to Edward the Exile of England, and thus was the grandfather of Edgar the Aetheling, who was later proclaimed a king of England.
The Mongols
The next couple of centuries saw the rise of feudalism and the descendants of Rurik fighting amongst each other. The various principalities were, as with much of Europe during the Middle Ages, semi-independent duchies, similar to city-states. Kiev became one of the many, and around the mid 1100s Yaroslav the Wise’s great-grandson, Yuri Dolgoruki, built a number of new villages and principalities, including one along the Moskva river they called Moscow. They built a keep there, of the kind the Rus called a kremlin.  The walls of those keeps didn’t keep the Mongols out. They arrived in 1237. They moved the capital to Moscow and Yaroslav II, Yuri’s grandson, was poisoned in the court of Genghis Khan’s grandson Batu. The Mongols ruled, sometimes through the descendants of Rurik, sometimes deposing them and picking a new one, for 200 years. This is known as the time of the “Mongol yoke.”  One of those princes the Mongols let rule was Ivan I of Moscow, who helped them put down a revolt in a rival area in the 1300s. The Mongols trusted Moscow after that, and so we see a migration of rulers of the land up into Moscow. The Golden Horde, like the Viking Danes and Swedes, settled in some lands. Kublai Khan made himself ruler of China. Khanates splintered off to form the ruling factions of weaker lands, such as modern India and Iran - once cradles of civilization. Those became the Mughal dynasties as they Islamized and moved south. And so the Golden Horde became the Great Horde. Ivan the Great expanded the Muscovite sphere of influence, taking Novgorod, Rostov, Tver, Vyatka, and up into the land of the Finns. They were finally strong enough to stand up to the Tatars, as they called their Mongol overlords, and made the Great Stand on the Ugra River, where summoning a great army simply frightened the Mongol Tatars off. Turns out they were going through their own power struggles between princes of their realm, and Akhmed was assassinated the next year, with his successor becoming Sheikh instead of Khan. Ivan’s grandson, Ivan the Terrible, expanded the country even further. He made deals with various Khans and then conquered others, pushing east to conquer the Khanate of Sibir and so conquered Siberia in the 1580s. The empire would go on to stretch all the way to the Pacific Ocean.  He had a son who didn’t have any heirs and so was the last in the Rurikid dynasty. But Ivan the Terrible had married Anastasia Romanov, whom, when he crowned himself Caesar, or Tsar as they called it, he made Tsaritsa. And so the Romanovs came to power in 1613 and, following the rule of Peter the Great from 1682 to 1725, brought the Enlightenment to Russia. He started the process of industrialization, built a new capital he called St Petersburg, built a navy, made peace with the Polish king and then the Ottoman sultan, and so took control of the Baltics, which the Swedes had controlled on and off since the time of Rurik. 
Russian Empire
Thus began the expansion as the Russian Empire. They used an alliance with Denmark-Norway and chased the Swedes through the Polish-Lithuanian Commonwealth, unseating the Polish king along the way. He probably should not have allied with them. 
They moved back into Finland, took the Baltics, so modern Latvia and Estonia, and pushed all the way across the Eurasian continent, across the frozen tundra, and into Alaska. Catherine the Great took power in 1762 and ignited a golden age. She took Belarus, parts of Mongolia, parts of modern-day Georgia, overtook the Crimean Khanate and modern-day Azerbaijan, and during her reign founded Odessa, Sevastopol, and other cities. She modernized the country like Peter had and oversaw nearly constant rebellions in the empire. And her children and their descendants went on to fill the courts of Britain, Denmark, Sweden, Spain, and the Netherlands. She set up a national network of schools, with teachings from Russian and western philosophers like John Locke. She collected vast amounts of art, including much from China. She set up a banking system and issued paper money. She also started the process to bring about the end of serfdom, even though, between her own estates and the crown's, she owned some 3.3 million serfs herself. She planned on invading Persia, but passed away before her army got there. Her son Paul halted expansion. And probably just in time. Her grandson Alexander I supported other imperial powers against Napoleon and so had to deal with the biggest invasion Russia had seen. Napoleon moved in with his grand army of half a million troops. The Russians used a tactic Peter the Great had used and mostly refused to engage Napoleon's troops, instead burning the supply lines. Napoleon lost 300,000 troops during that campaign. Soon after the Napoleonic wars ended, the railways began to appear. The country was industrializing and, with guns and cannons, growing stronger than ever. The Opium Wars, between China and the UK and then the UK and France, were not good to China. Even though Russia didn't really help, they ended up with a piece of the Chinese empire, and so in the last half of the 1800s the Russian Empire grew by another 300,000 square miles on the back of a series of unequal treaties, as they came to be known in China following World War I. And so by 1895, the Romanovs had expanded past their native Moscow, driven back the Mongols, followed some of the former Mongol Khanates to their lands and taken them, took Siberia, parts of the Chinese empire, the Baltics, and Alaska, and were sitting on the third largest empire the world had ever seen, which covered nearly 17 percent of the world. Some 8.8 million square miles. And yet, still just a little smaller than the British empire. They had small skirmishes with the British but by and large looked to smaller foes or proxy wars, with the exception of the Crimean War. 

Revolution

The population was expanding and industrializing. Workers flocked to factories on those train lines. And more people in more concentrated urban areas meant more ideas. Rurik came in 862 and his descendants ruled until the Romanovs took power in 1613. They ruled until 1917. That's over 1,000 years of kings, queens, Tsars, and Emperors. The ideas of Marx slowly spread. While the ruling family was busy with treaties and wars and empire, they forgot to pay attention to the wars at home. People like Vladimir Lenin discovered books by people like Karl Marx. Revolution was in the air around the world. France had shown monarchies could be toppled. Some of the revolutionaries were killed, others put to work in labor camps, others exiled, and still others continued on. Still, the empire was caught up in global empire intrigues. 
The German empire had been growing, and the Russians had the Ottomans and Bulgarians on their southern borders. They allied with France to take on Germany, just as they'd allied with Germany to take down Poland. And so after over 1.8 million dead Russians and another 3.2 million wounded or captured, and food shortages back home and in the trenches, the people finally had enough of their Tsar. They went on strike, but Tsar Nicholas ordered the troops to fire. The troops refused. The Duma stepped in and forced Nicholas to abdicate. Russia had revolted in 1917, sued Germany for peace, and gave up more territory than they wanted in the process. Finland, the Baltics, their share of Poland, parts of the Ukraine. It was too much. But those lands took a lot of German time and focus to occupy, and so it helped to weaken them in the overall war effort. Back home, Lenin took a train home and his Bolshevik party took control of the country. After the war Poland was again independent. Yugoslavia, Czechoslovakia, Estonia, Lithuania, Latvia, and the Serbs became independent nations. In the wake of the war the Ottoman Empire was toppled and modern Turkey was born. The German Kaiser abdicated. And socialism and communism were on the rise. In some cases, that was really just a new way to refer to a dictator who pretended to care about the people. Revolution had come to China in 1911 and Mao took power in the 1940s. Meanwhile, Lenin passed in 1924 and was followed by Rykov, then Molotov, who helped spur a new wave of industrialization. Then came Stalin, who led purges of the Russian people in a number of show trials before getting the Soviet Union, as the Russian Empire was now called, into World War II. Stalin encouraged Hitler to attack Poland in 1939. Let's sit on that for a second. He tried to build a pact with the Western powers, and after that broke down, he launched excursions annexing parts of Poland, Finland, Romania, Lithuania, Estonia, and Latvia. Many of those lands were parts of the former Russian Empire. The USSR had held chunks of Belarus and the Ukraine before, but as of the 1950s brought Poland, East Germany, Czechoslovakia, Romania, and Bulgaria into the Warsaw Pact, a bloc of nations we later called the Soviet Bloc. They even built a wall between East and West Germany. During and after the war, the Americans whisked German scientists off to the United States. The Soviets were in no real danger from an invasion by the US, and the weakened French, Austrians, and military-less Germans were in no place to attack the Soviets. The UK had to rebuild and the British empire quickly fell apart. Even the traditional homes of the Vikings who'd rowed down the rivers would cease to be global powers. And thus there were two superpowers remaining in the world, the Soviets and the United States. 

The Cold War

The Soviets took back much of the former Russian Empire, claiming they needed buffer zones or taking lands through subterfuge. At its peak, the Soviet Union covered 8.6 million square miles; just a couple hundred thousand shy of the Russian Empire. On the way there, they grew to a nation of over 290 million people with dozens of nationalities. And they expanded the sphere of influence even further, waging proxy wars in places like Vietnam and Korea. They never actually went to war with the United States, in much the same way they mostly avoided the direct big war with the Mongols and the British - and how Rorik of Dorestad played both sides of Frankish conflicts. We now call this period the Cold War. The Cold War was an arms race. 
This manifested itself first in nuclear weapons. The US is still the only country to detonate a nuclear weapon in wartime, with the bombings that brought the surrender of Japan at the end of the war. The Soviets weren't that far behind and detonated a bomb in 1949. That was the same year NATO was founded as a treaty organization between Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States. The US upped the ante with the hydrogen bomb in 1952. The Soviets got the hydrogen bomb in 1955. And then came the Space Race. Sputnik launched in 1957. The Russians were winning the space race. They further proved that when they put Yuri Gagarin up in 1961. By 1969 the US put Neil Armstrong and Buzz Aldrin on the moon. Each side developed military coalitions, provided economic aid to allies, built large arsenals of weapons, practiced espionage against one another, deployed massive amounts of propaganda, and spread their ideology. Or at least that's what the modern interpretation of history tells us. There were certainly ideological differences, but the Cold War saw the spread of communism as a replacement for conquest. That started with Lenin trying to lead a revolt throughout Europe but shifted over the decades into, again, pure conquest. Truman saw the rapid expansion of the Soviets and, without the context that they were mostly reclaiming lands once conquered by Russian imperial forces, won support for the Truman Doctrine, which sought to contain Soviet expansion in Eastern Europe. First, they supported Greece and Turkey. But the support extended throughout areas adjacent to Soviet interests. Eisenhower saw how swiftly the Russians were putting science into action with satellites and space missions and nuclear weapons - and responded with an emphasis on American science. The post-war advancements in computing were vast in the US. The industry moved from tubes and punch cards to interactive computing after the Whirlwind computer was developed at MIT, first to help train pilots and then to intercept Soviet nuclear weapons. Packet switching, and so the foundations of the Internet, were laid to build a computer network that could withstand nuclear attack. Graphical interfaces got their start when Ivan Sutherland was working at MIT on the grandchild of Whirlwind, the TX-2 - which would evolve into the Digital Equipment PDP once commercialized. Drum memory, which became the foundation of storage, was developed to help break Russian codes and intercept messages. There isn't a part of the computing industry that isn't touched by the research farmed out by various branches of the military and by ARPA. Before the Cold War, Russia and then the Soviet Union were about half for and half against various countries when it came to proxy wars. They tended to play both sides. During the Cold War it was pretty much always the US or UK vs the Soviet Union. Algeria, Kenya, Taiwan, the Sudan, Lebanon, Central America, the Congo, Eritrea, Yemen, Dhofar, Malaysia, the Dominican Republic, Chad, Iran, Iraq, Thailand, Bolivia, South Africa, Nigeria, India, Bangladesh, Angola, Ethiopia, the Sahara, Indonesia, Somalia, Mozambique, Libya, and Sri Lanka. And the big ones were Korea, Vietnam, and Afghanistan. Many of these conflicts still rage on today. The Soviet army grew to over 5 million soldiers. The US started with 2 nuclear weapons in 1945 and had nearly 300 by 1950, when the Soviets had just 5. 
The US stockpile grew to over 18,000 in 1960 and peaked at over 31,000 in 1965. The Soviets had 6,129 by then but kept building until they got close to 40,000 by 1980. By then the Chinese, France, and the UK each had over 200, and India and Israel had developed nuclear weapons. Since then only Pakistan and North Korea have joined the nuclear powers, although there are US warheads located in Germany, Belgium, Italy, Turkey, and the Netherlands. 

Modern Russia

The buildup was expensive. Research, development, feeding troops, supporting asymmetrical warfare in proxy states, and trade sanctions put a strain on the government and nearly bankrupted Russia. They fell behind in science, after Stalin had been anti-computers. Meanwhile, the US was able to parlay all that research spending into true productivity gains. The venture capital system also fueled increasingly wealthy companies who paid taxes. The gains showed up in banking, supply chains, refrigeration, miniaturization, radio, television, and everywhere else we could think of. By the 1980s, the US had Apple and Microsoft and Commodore. The Russians were trading blat, an informal black market currency, to gain access to knock-offs of ZX Spectrums when graphical interface systems were being born. The system of government in the Soviet Union had become outdated. There were some who had thought to modernize it into more of a technocracy in an era when the US was just starting to build ARPANET - but those ideas never came to fruition. Instead it became almost feudalistic, with high-ranking party members replacing the boyars, or aristocrats, of the old Kievan Rus days. The standard of living suffered. So many cultures and tribes under one roof, but only the Slavs had much say. As the empire over-extended there were food shortages. If there are independent companies, the finger can be pointed in their direction, but when food is rationed by the Politburo, the blame lands on the state. And the decline in agricultural production meant depending on bringing food in from the outside. That meant paying for it. Pair that with uneven distribution and overspending on the military. The Marxist-Leninist doctrine called for a one-party state. The Communist Party. Mikhail Gorbachev allowed countries in the Bloc to move in a democratic direction with multiple parties. The Soviet Union simply became unmanageable. And while Gorbachev took the blame for much of the downfall of the empire, there was already a deep decay - they were an oligarchy pretending to be a communist state. The countries outside of Russia quickly voted in non-communist governments, and by 1989 the Berlin Wall came down and the Eastern European countries began to seek independence, most moving towards democratic governments. The collapse of the Soviet Union resulted in 15 separate countries and left the United States standing alone as the global superpower. The Czech Republic, Hungary, and Poland joined NATO in 1999. 2004 saw Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia join. 2009 brought in Albania and Croatia. 2017 led to Montenegro and then North Macedonia. Then came the subject of adding Ukraine. The country from which the Kievan Rus had migrated throughout the lands. The stem from which the name, and possibly the soul, of the country had sprouted. How could Vladimir Putin allow that to happen? Why would it come up? As the Soviets pulled out of the Bloc countries, they left remnants of their empire behind. Belarus, Kazakhstan, and the Ukraine were left with plenty of weapons that couldn't be moved quickly. 
Ukraine alone had 1,700 nuclear weapons, which included 16 intercontinental ballistic missiles. Add to that nearly 2,000 biological and chemical weapons. Those went to Russia or were disassembled once the Ukrainians were assured of their sovereignty. The Crimea, which had been fought over in multiple bloody wars, was added to Ukraine. At least until 2014, when Putin wanted the port of Sevastopol, founded by Catherine the Great. Now there was a gateway from Russia to the Mediterranean yet again. So the Kievan Rus under Rurik is really the modern Ukraine, and the Russian Empire and then the Romanov dynasty flowed from that following the Mongol invasions. The Russian Empire freed other nations from the yoke of Mongol rule but became something entirely different once it over-extended. Those countries in the empire often traded the Mongol yoke for the Soviet yoke. And that was entirely different from the Soviet Union that fought the Cold War and the modern Russia we know today. Meanwhile, the states of Europe had been profoundly changed since the days of Thomas Paine's The Rights of Man and Marx. Many moved left of center and socialized parts of their economies. No one ever need go hungry in a Scandinavian country. Health care, education, even child care became free in many countries. Many of those same ideals that helped lift the standard of living for all in developed countries then spread, including to Canada and, in some ways, to the US. And so we see socialism to capitalism as more of a spectrum than a boolean choice now. And totalitarianism, oligarchy, and democracy as a spectrum as well. Many could argue reforms in democratic countries are paid for by lobbyists who are paid for by companies, and thus an effective oligarchy. Others might argue the elections in many countries are rigged, and so they aren't even oligarchies, they're monarchies. Putin took office in 1999, and while Dmitry Medvedev was the president for a time, he effectively ruled in a tandemocracy with Putin until Putin decided to get back in power. That's 23 years and counting, and just a few months behind when King Abdullah took over in Jordan and King Mohammed VI took over in Morocco. And so while democratic in name, they're not all quite so democratic. Yet they do benefit from technology that began in Western countries and spread throughout the world. Companies like semiconductor manufacturer Sitronics even went public on the London Stock Exchange. Hard-line communists might (and do) counter that the US has an empire and that Western countries conspire for the downfall of Russia or want to turn Russians into slaves to the capitalist machine. As mentioned earlier, there has always been plenty of propaganda in this relationship. Or gaslighting. Or fake news. Or disinformation. One of those American advancements that ties the Russians to the capitalist yoke is interactive computing. That could have been developed in Glushkov's or Kitov's labs in Russia, as they had the ideas and talent. But because of the oligarchy that formed around communism, the ideas were sidelined and it came out of MIT - and that led to Project MAC, which did as much to democratize computing as Gorbachev did to democratize the Russian Federation.
2/18/2022 - 45 minutes, 53 seconds

Project MAC and Multics

Welcome to the history of computing podcast. Today we're going to cover a Cold War-era project called Project MAC that bridged MIT with GE and Bell Labs. The Russians beat the US to space when they launched Sputnik in 1957. Many in the US felt the nation was falling behind, and so later that year president Dwight D. Eisenhower appointed then-president of MIT James Killian as the Presidential Assistant for Science and created ARPA. The office was lean and funded a few projects without much oversight. One was Project MAC at MIT, which helped cement the university as one of the top in the field of computing as it grew. Project MAC, short for Project on Mathematics and Computation, was a 1960s collaborative endeavor to develop a workable timesharing system. The concept of timesharing initially emerged during the late 1950s. Scientists and researchers finally went beyond batch processing with Whirlwind and its spiritual successors, the TX-0 through TX-2 computers at MIT. We had real computer memory now, and so interactive computing. That meant we could explore different ways to connect directly with the machine. In 1959, British mathematician Christopher Strachey gave the first public presentation on timesharing at a UNESCO meeting, and John McCarthy distributed an internal letter regarding timesharing at MIT. Timesharing was initially demonstrated at the MIT Computation Center in November 1961, under the supervision of Fernando Corbato, an MIT professor. J.C.R. Licklider at ARPA had been involved with MIT for most of his career in one way or another and helped provide vision and funding along with contacts and guidance, including getting the team to work with Bolt, Beranek & Newman (BBN). Yuri Alekseyevich Gagarin went to space in 1961. The Russians were still lapping us. Money. Governments spend money. Let's do that. Licklider assisted in the development of Project MAC, for machine-assisted cognition, led by Professor Robert M. Fano. He then funded the project with $3 million per year. That would become the most prominent initiative in timesharing. In 1967, the Information Processing Techniques Office invested more than $12 million in over a dozen timesharing programs at colleges and research institutions. Timesharing then enabled the development of new software and hardware separate from that used for batch processing. Thus, one of the most important innovations to come out of the project was an operating system capable of supporting multiple parallel users - all of whom could feel like they had complete control of the machine. The operating system they created would be known as Multics, short for Multiplexed Information and Computing Service. It was created for a GE 645 computer but was modular in nature and could be ported to other computers. The project was a collaborative effort between MIT, GE, and Bell Labs. Multics was the first time we really split files away from the objects read in memory - files were read into memory for processing, then written back to disk. They developed the concepts of dynamic linking, daemons, procedural calls, hierarchical file systems, process stacks, a split between user land and the system, and much more. Within six months of Project MAC's creation, 200 users in 10 different MIT departments had secured access to the system. The Project MAC laboratory split off from the Department of Electrical Engineering by 1967 and evolved into an interdepartmental laboratory. 
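To make that timesharing idea a little more concrete, here's a rough Python sketch - nothing from Project MAC itself, just an illustration of the round-robin scheduling at the heart of these systems, where every user's job gets a small slice of the machine in turn and so every user feels like they own the whole computer:

```python
# A toy round-robin timesharing loop: each "user job" gets a small
# quantum of work in turn, so every user feels like they own the machine.
from collections import deque

def timeshare(jobs, quantum=2):
    """jobs: dict of user -> units of work remaining."""
    queue = deque(jobs.items())
    while queue:
        user, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        print(f"{user} runs for {slice_used} time units")
        remaining -= slice_used
        if remaining > 0:
            queue.append((user, remaining))  # back of the line for the next turn

# Hypothetical users and workloads, purely for illustration.
timeshare({"corbato": 5, "fano": 3, "licklider": 4})
```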
Multics progressed from computer timesharing to a networked computer system, integrating file sharing and administration capabilities and security mechanisms into its architecture. The sophisticated design, which could serve 300 daily active users on 1,000 MIT terminals within a couple more years, inspired engineers Ken Thompson and Dennis Ritchie to create their own operating system at Bell Labs, which evolved into the C programming language and the Unix operating system. See, all the stakeholders with all the things they wanted in the operating system had built something slow and fragile. Solo developers don't tend to build amazing systems, but neither do large intracompany bureaucracies. GE never did commercialize Multics because they ended their computer hardware business in 1970. Bell Labs dropped out of the project as well. So Honeywell acquired the General Electric computer division, and with it the rights to the Multics project. In addition, Honeywell possessed several other operating systems, each supported by its own internal organization. In 1976, Project MAC was renamed the Laboratory for Computer Science (LCS) at MIT, broadening its scope. Michael L. Dertouzos, the lab's director, advocated developing intelligent computer programs. To increase computer use, the laboratory analyzed how to construct cost-effective, user-friendly systems, along with the theoretical underpinnings of computer science around space and time constraints. Some of their projects ran for decades afterwards. In 2000, the last Multics sites were shut down. The concept of buying corporate "computer utilities" was a large area of research in the late 60s to 70s. Scientists bought time on computers that universities purchased. Companies did the same. The pace of research at both increased dramatically. Companies like Tymshare and IBM made money selling time or processing credits, and then after an anti-trust case, IBM handed that business over to Control Data Corporation, who developed training centers to teach people how to lease time. These helped prepare a generation of programmers for when the microcomputers came along, often taking people who had spent their whole careers on CDC Cybers or Burroughs mainframes by surprise. That seems to happen with the rapid changes in computing. But it was good to those who invested in the concept early. And the lessons learned about scalable architectures were skills that transitioned nicely into a microcomputer world. In fact, many environments still run on applications built in this era. The Laboratory for Computer Science (LCS) accomplished other ground-breaking work, including playing a critical role in advancing the Internet. It was often larger but less opulent than the AI lab at MIT. And their role in developing applications that would facilitate online processing and evaluation across various academic fields, such as engineering, medical, and library sciences, led to advances in each. In 2003, LCS merged with MIT's AI laboratory to establish the Computer Science and Artificial Intelligence Laboratory (CSAIL), one of the flagship research labs at MIT. And in the meantime countless computer scientists who contributed at every level of the field flowed through MIT - some because of the name made in those early days. And the royalties from patents have certainly helped the university's endowment. The Cold War thawed. The US reduced ARPA spending after the Mansfield Amendment was passed in 1969. 
The MIT hackers flowed out to the world, changing not only how people thought of automating business processes, but how they thought of work and collaboration. And those hackers were happy to circumvent all the security precautions put on Multics, and so cultural movements evolved from there. And the legacy of Multics lived on in Unix, which evolved to influence Linux and is in some way now a part of iOS, Mac OS, Android, and Chrome OS.
2/15/2022 - 11 minutes, 31 seconds

Dell: From A Dorm Room to a Board Room

Dell is one of the largest technology companies in the world, and it all started with a small startup that sold personal computers out of Michael Dell's dorm room at the University of Texas. From there, Dell grew into a multi-billion dollar company, bought and sold other companies, went public, and now manufactures a wide range of electronics including laptops, desktops, servers, and more.  After graduating high school, Michael Dell enrolled at the University of Texas at Austin with the idea that he would some day start his own company. Maybe even in computers. He had an Apple II in school, and Apple and other companies had done pretty well by then in the new microcomputer space. He took it apart, and these computers were just a few parts that were quickly becoming standardized. Parts that could be bought off the shelf at computer stores. So he opened a little business that he ran out of his dorm room fixing computers and selling little upgrades. Many a student around the world still does the exact same thing. He also started buying up parts and building new computers. Texas Instruments was right up the road in Dallas. And there was a price war in the early 80s between Commodore and Texas Instruments. Computers could be big business. And it seemed clear that this IBM PC that was introduced in 1981 was going to be more of a thing, especially in offices. Especially since there were several companies making clones of the PC, including Compaq, who was all over the news as the Silicon Cowboys, having gotten to $100 million in sales within just two years.  So from his dorm room in 1984, Dell started a little computer company he called PC's Limited. He built PCs using parts and experimented with different combinations. One customer led to another and he realized that a company like IBM bought a few hundred dollars worth of parts, put them in a big case, and sold it for thousands of dollars. Any time a company makes too much margin, smaller and more disruptive companies will take the market away. Small orders turned into bigger ones and he was able to parlay each into being able to fill bigger orders.  They released the Turbo PC in 1985. A case, a motherboard, a keyboard, a mouse, some memory, and a CPU chip. Those first computers he built came with an 8088 chip. Low overhead meant he could be competitive on price: $795. No retail storefront and no dealers, who often took 25 to 50 percent of the money spent on computers, and the company ran out of a condo. He'd sold newspapers as a kid so he was comfortable picking up the phone and dialing for dollars. He managed to make $200,000 in sales in that first year. So he dropped out of school to build the company.  To keep costs low, he sold through direct mail and over the phone. No high-paid sellers in blue suits like IBM, even if the computers could run the same versions of DOS. He incorporated as the Dell Computer Company in 1987 and started to expand internationally on the back of rapid revenue growth and good margins. They hit $159 million in sales that year. So they took the company public in 1988. The market capitalization when they went public was $30 million and quickly rose to $80 million. By then we'd moved past the 8088 chips and the industry was standardizing on the 80386 chip, following the IBM PS/2. By the end of 1989 sales hit $250 million.  They needed more research and development firepower, so they brought in Glenn Henry. 
He'd been at IBM for over 20 years and had managed multiple generations of mid-range mainframes, then servers, and then RISC-based personal computers. He helped grow the R&D team into the hundreds and the quality of the computers went up, which paired well with the cost of those computers remaining affordable compared to the rest of the market.  Dell was, and to a large degree still is, a direct to consumer company. They experimented with the channel in the early 1990s, which is to say 3rd parties that were authorized to sell their computers. They signed deals to sell through distributors, computer stores, warehouse clubs, and retail chains. But the margins didn't work, so within just a few years they cancelled many of those relationships. Instead they went from selling to companies to the adjacent home market.  It seems like that's the last time in recent memory that direct mailing as a massive campaign worked. Dell was able to undercut most other companies who sold laptops at the time by going direct to consumers. They brought in marketing execs from other companies, like Tandy. The London office was a huge success, bringing in tens of millions in revenue, so they brought on a Munich office and then slowly expanded into other countries. They were one of the best sales and marketing machines in that direct to consumer and business market. Customers could customize orders, so maybe add a faster CPU, some extra memory, or even a scanner, modem, or other peripheral. They got the manufacturing to the point where they could turn computers around in five days. Just a decade earlier people waited months for computers. They released their first laptop in 1989, which they called the 316LT. Just a few years earlier, Michael Dell was in a dorm room. If he'd completed a pre-med degree and gotten into medical school, he'd likely be in his first or second year. He was now a millionaire, and just getting started. With the help of their new R&D chief, they were able to get into the server market, where the margins were higher, and that helped get more corporate customers. By the end of 1990, they were the sixth largest personal computer company in the US. To help sales in the rapidly growing European and Middle Eastern offices, they opened another manufacturing location in Ireland. And by 1992, they became one of the top 500 companies in the world. Michael Dell, instead of being on an internship in medical school and staring down the barrel of school loans, was the youngest CEO in the Fortune 500. The story is almost boring. They just grow and grow. Especially when rivals like IBM, HP, Digital Equipment, and Compaq made questionable finance and management choices that didn't allow those companies to remain competitive. They all had better technology at many times, but none managed to capitalize on the markets. Instead of becoming the best computer makers they could be, they played corporate development games and wandered away from their core businesses. Or, like IBM, they decided that they didn't want to compete with the likes of Dell and just sold off their PC line to Lenovo. But Dell didn't make crappy computers.  They weren't physically inspiring like some computers at the time, but they got the job done, and offices that needed dozens or hundreds of machines often liked working with Dell. They continued the global expansion through the 90s and added servers in 1996. By now there were customers buying their second or third generation of computer, going from DOS to Windows 3.1 to Windows 95. 
And they did something else really important in 1996: they began to sell through the web at dell.com. Within a few months they were doing a million dollars a day in sales, and the next year they hit 10 million PCs sold.  Little Dell magazines showed up in offices around the world. Web banners appeared on web pages. Revenues responded and went from $2.9 billion in 1994 to $3.5 billion in 1995. And they were running at margins over 20 percent. Revenue hit $5.3 billion in 1996, $7.8 in 1997, $12.3 in 1998, $18.2 in 1999, and $25.3 in 2000. The 1990s had been good to Dell. Their stock split 7 times. It wouldn't double every other year again, but would double again by 2009. In the meantime, the market was changing. The Dell OptiPlex is one of the best selling lines of computers of all time and offers a glimpse into what was changing. Keep in mind, this was the corporate enterprise machine. Home machines could be better or lesser, according to the vendor. The processors have ranged from a Celeron up to a Core i9 at this point.  Again, we needed a motherboard, usually an ATX or a derivative. They started with that standard ATX motherboard form factor but later grew to be a line that came in the tower, the micro, and everything in between. Including an all-in-one. That Series 1 was beige and just the right size to put a big CRT monitor on top of it. It sported a 100 MHz 486 chip and could take up to 64 megabytes of memory across a pair of SIMM slots. The Series 2 was about half the size, and by now we saw those small early LCD flat panel screens. They were still beige though. As computers went from beige to black with the Series 3, we started to see the iconic metallic accents we're accustomed to now. They followed along with the Intel replacement for the ATX motherboard, the BTX, and we saw those early PCI form factors traded for PCIe. By the end of the Series 3 in 2010, the OptiPlex 780 could have up to 16 gigs of memory as a max, although that would set someone back a pretty penny in 2009. And the processors came ranging from the 800 MHz to 1.2 GHz. We'd also gone from PS/2 ports with serial and parallel to USB 2 ports, and from SIMM to DIMM slots, up to DDR4 with the memory about as fast as a CPU.  But they went back to the ATX and newer Micro ATX with the Series 4. They embraced the Intel i series chips and we got all the fun little metal designs on the cases. Cases that slowly shifted to being made of recycled parts. The Latitude laptops followed a similar pattern. Bigger, faster, and heavier. They had released the Dell Dimension and acquired Alienware in 2006, at the time the darling of the gamer market. Higher margin hardware, like screaming fast GPU graphics cards. But also lower R&D costs for the Dell lines, as there was the higher end line that flowed down to the OptiPlex and then Dimension. Meanwhile, there was this resurgent Apple. They'd released the iMac in 1998 and helped change the design language for computers everywhere. Not that everyone needed clear cases. Then came the iPod in 2001. Beautiful design could sell products at higher prices. But they needed to pay a little more attention to detail. But more importantly, those Dells were getting bigger and faster and heavier while the Apple computers were getting lighter, and even the desktops more portable. The iPhone came in 2007. The Intel MacBook Air came 10 years after that iMac, in 2008. The entire PC industry was in a race for bigger power supplies to push more and more gigahertz through a CPU without setting the house on fire, and Apple changed the game. 
The iPad was released in 2010. Apple finally delivered on the promise of the Dynabook that began life at Xerox PARC. Dell had been in the driver's seat. They became the top personal computer company in 2003 and held that spot until the merged HP and Compaq overtook them. But they would never regain that spot, as revenue slowed for almost a decade from the time the iPad was released, even contracting at times. See, Dell had a close partnership with Intel and Microsoft. Microsoft made operating systems for mobile devices, but the Dell Venue was not competitive with the iPhone. They also tried making a mobile device using Android, but the Streak never sold well and was discontinued as well.  While Microsoft retooled their mobile platforms to compete in the tablet space, Dell tried selling Android tablets but discontinued those in 2016. To make matters worse for Dell, they'd ridden a Microsoft Windows alliance where they never really had to compete with Microsoft for nearly 30 years, and then Microsoft released the Surface in 2012. The operating systems hadn't been pushing people to upgrade their computers, and Microsoft even started selling Office directly and online, so Dell lost revenue bundling Office with computers.  They too had taken their eye off the market. HP bought EDS in 2008, diversifying into a services organization, something IBM had done well over a decade before. Except rather than sell their PC business they made a go at both. So Dell did the same, acquiring Perot Systems, the company Perot started after he sold EDS and ran for president, for $3.9 billion, which came in at a solid $10 billion less than what HP paid for EDS.  The US was in the midst of a recession, so that didn't help matters either. But it did make for an interesting investment climate. Interest rates were down, so large investors needed to put money to work to show good returns for customers. Dell had acquired just 8 companies before the Great Recession but acquired an average of 5 over each of the next four years. This allowed them to diversify. And Michael Dell made another savvy finance move: he took the company private in 2013 with the help of Silver Lake Partners. Five years off the public market was just what they needed. In 2018 they went public again on the back of revenues that had shot up to $79 billion from a low of around $50 billion in 2016. And they exceeded $94 billion in 2021.  The acquisition of EMC and VMware, at $67 billion, was probably the most substantial. That put them in the enterprise server market and gave them a compelling offer at pretty much every level of the enterprise stack. Although at this point maybe it remains to be seen if the enterprise server and storage stack is still truly a thing.  A Dell OptiPlex costs about the same amount today as it did when Dell sold that first Turbo PC. They can be had cheaper but probably shouldn't. Adjusted for an average 2.6 percent inflation rate, that brings those first Dell PCs to just north of $2,000 as of the time of this writing. Yet the computer remained the same, with fairly consistent margins. That means the components have gotten half as expensive, because they're made in places with cheaper labor than they were in the early 1980s. And it means there are potentially fewer components, like a fan for certain chips, or separate RAM when memory is integrated into a SoC, etc.  But the world is increasingly mobile. Apple, Google, and Microsoft sell computers for their own operating systems now. Dell doesn't make phones and they aren't in the top 10 for the tablet market. 
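Back to that inflation arithmetic for a moment: here's a small Python sketch of the compounding. The 2.6 percent rate comes straight from the paragraph above, and the 1985-to-2022 span is an assumption about "as of the time of this writing," not an official figure:

```python
# Compound the 1985 Turbo PC price forward at an assumed average inflation rate.
def inflate(price, rate, years):
    return price * (1 + rate) ** years

turbo_pc_1985 = 795       # launch price of the Turbo PC
rate = 0.026              # average annual inflation rate assumed above
years = 2022 - 1985       # assumed span, "as of the time of this writing"

print(f"${inflate(turbo_pc_1985, rate, years):,.0f}")  # prints a bit over $2,000
```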
People no longer buy products from magazines that show up in the mail. Now it's a quick search on Amazon. And looking for a personal computer there, the results right this second (that is, while writing this paragraph) showed the exact same order as vendor market share for 2021: Lenovo, followed by HP, then Dell. All of the devices looked about the same. Kinda like those beige injection-molded devices looked about the same.  HP couldn't have such a large company exist under one roof and eventually spun HP Enterprise out into its own entity. Dell sold Perot Systems to NTT Data to get the money to buy EMC on leverage. Not only do many of these companies have products that look similar, but their composition does as well. What doesn't look similar is Michael Dell. He's worth just shy of $60 billion (depending on the day and the markets). His book, Direct From Dell, is one of the best looks one can find at the inside of a direct mail-order business making the transition to early e-commerce. Oh, and it's not just him and some friends in a dorm room. It's 158,000 employees who help make up over a $42 billion market cap. And helped generations of people afford personal computers. That might be the best part of such a legacy.
2/4/2022 - 24 minutes, 24 seconds

Bill Atkinson's HyperCard

We had this Mac lab in school. And even though they were a few years old at the time, we had a whole room full of Macintosh SEs. I'd been using the Apple IIc before that and these just felt like Isaac Asimov himself dropped them off just for me to play with. Only thing: no BASIC interpreter. But in the Apple menu, tucked away in the corner, was a little application called HyperCard. HyperCard wasn't left by Asimov, but instead burst from the mind of Bill Atkinson. Atkinson was the 51st employee at Apple and a former student of Jef Raskin, the initial inventor of the Mac before Steve Jobs took over. Steve Jobs convinced him to join Apple, where he started with the Lisa and then joined the Mac team, until he left with the team who created General Magic and helped bring shape to the world of mobile devices. But while at Apple he was on the original Mac team developing the menu bar, the double-click, Atkinson dithering, MacPaint, QuickDraw, and HyperCard.  Those were all amazing tools and many came out of his work on the original 1984 Mac and the Lisa days before that. But HyperCard was something entirely different. It was a glimpse into the future, even if self-contained on a given computer. See, there had been this idea floating around for a while.  Vannevar Bush initially introduced the world to a device with all the world's information available in his article "As We May Think" in 1945. Doug Engelbart had a team of researchers working on the oN-Line System and gave "The Mother of All Demos" in 1968, where he showed how that might look, complete with a graphical interface and hypertext, including linked content. Ted Nelson furthered the ideas of linked content in 1969, which evolved into what we now call hyperlinks. Although Nelson thought ahead to include the idea of what he called transclusions, or snippets of text displayed on the screen from their live, original source.  HyperCard built on that wealth of information with a database that had a graphical front-end that allowed inserting media, and a programming language they called HyperTalk. Databases were nothing new. But a simple form creator that supported graphics, and again stressed simple, was new. Something else that was brewing was this idea of software economics. Brooks' Law laid it out, but Barry Boehm's book on Software Engineering Economics took the idea of rapid application development another step forward in 1981. People wanted to build smaller programs faster. And so many people wanted to build tools, and we needed to make it easier to do so in order for computers to make us more productive. Against that backdrop, Atkinson took some acid and came up with the idea for a tool he initially called WildCard. Dan Winkler signed onto the project to help build the programming language, HyperTalk, and they got to work in 1986. They changed the name of the program to HyperCard and released it in 1987 at Macworld. Regular old people could create programs without knowing how to write code. There were a number of User Interface (UI) components that could easily be dropped on the screen, and true to his experience there was a panel of elements like boxes, erasers, and text, just like we'd seen in MacPaint. Suppose you wanted a button: just pick it up from the menu and drop it where it goes. Then make a little script using HyperTalk that read more like the English language than a programming language like LISP.  Each stack might be synonymous with a web page today. 
And a card was a building block of those stacks. Consider the desktop metaphor extended to a rolodex of cards. Those cards can be stacked up. There were template cards, and if the background on a template changed, that flowed to each card that used the template, like styles in Keynote might today. The cards could have text fields, video, images, buttons, or anything else an author could think of. And that word, author, is important. Apple wanted everyone to feel like they could author a HyperCard stack or program or application or… app. Just as they do with Swift Playgrounds today. That never left the DNA. We can see that ease of use in how scripting is done in HyperTalk. Not only the word scripting rather than programming, but how HyperTalk is weakly typed. This is to say there's no type safety, so a variable might be used as an integer or a boolean. That either involves more work by the interpreter or compiler - or programs tend to crash a lot. Put the work on the programmers who build programming tools rather than on the authors of HyperCard stacks. The ease of use and visual design made HyperCard popular instantly. It was the first of its kind. It didn't compile at first, although larger stacks got slow because HyperTalk was interpreted, so the team added a just-in-time compiler in 1989 with HyperCard 2.0. They also added a debugger.  There were some funny behaviors. Like some cards could have objects that other cards in a stack didn't have. This led to many a migration woe for larger stacks that moved into modern tools. One that could almost be considered HyperCard 3 was FileMaker. Apple spun their software business out as Claris, who bought Nashoba Systems, which had this interesting little database program called Nutshell. That became FileMaker in 1985. By the time HyperCard was ready to become 3.0, FileMaker Pro was launched in 1990.  Attempts to make HyperCard 3.0 were still made, but HyperCard had its run by the mid-1990s and died a nice quiet death. The web was here and starting to spread. The concept of a bunch of stacks on just one computer had run its course. Now we wanted pages that anyone could access. HyperCard could have become that, but that isn't its place in history. It was a stepping stone and yet a milestone and a legacy that lives on. Because it was a small tool in a large company. Atkinson and some of the other team that built the original Mac were off to General Magic. Yet there was still this idea, this legacy.  HyperCard's interface inspired many modern applications we use to create applications. The first was probably Delphi, from Borland. But over time also Microsoft's Visual Basic and the Visual Studio we still use today. Even PowerPoint has some similarities with HyperCard's interface. WinPlus was similar to HyperCard as well. Even today, several applications and tools use HyperCard's ideas, such as HyperNext, HyperStudio, SuperCard, and LiveCode. HyperCard also certainly inspired FileMaker and every Apple development environment since - and through that, most every tool we use to build software, which we call the IDE, or Integrated Development Environment. The most important IDE for any Apple developer is Xcode. Open Xcode to build an app and look at Interface Builder and you can almost feel Bill Atkinson's dilated pupils looking back at you, 10 hours into a trip. 
And within those pupils, visions - visions of graphical elements being dropped into a card as people digitized CD collections, built repositories for their book collections, put all the Grateful Dead shows they'd recorded into a stack, or even built applications to automate their businesses. Oh, and let's not forget the zine, or the music and scene magazines that were so popular in the era that saw photocopying come down in price. HyperCard made for a pretty sweet zine.  HyperCard sprang from a trip when the graphical interface was still just coming into its own. Digital computing might have been 40 years old, but the information theorists and engineers hadn't been as interested in making things easy to use. They wouldn't have been against it, but they weren't trying to appeal to regular humans. Apple was, and still is. The success of HyperCard seems to have taken everyone by surprise, and it made a huge impact on the technology that came after it. Its popularity declined in the mid-1990s and it died quietly when Apple sold the last copy in 2004. But it surely left a legacy that has inspired many - especially old-school Apple programmers, in today's "there's an app for that" world.
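To picture the stack, background, and card model described above in modern terms, here's a small, purely illustrative Python sketch - none of these class or field names come from HyperCard itself, they're just stand-ins for the idea that a change to a background template flows to every card built on it:

```python
# An illustrative model of HyperCard's stack / background / card idea:
# cards inherit fields from their background template, and a change to
# the background shows up on every card that uses it.
class Background:
    def __init__(self, **fields):
        self.fields = dict(fields)          # shared, template-level fields

class Card:
    def __init__(self, background, **fields):
        self.background = background
        self.fields = dict(fields)          # card-specific fields and overrides

    def view(self):
        merged = dict(self.background.fields)
        merged.update(self.fields)          # card fields win over template fields
        return merged

rolodex = Background(title="Contacts", footer="My Stack")
card = Card(rolodex, name="Bill Atkinson")

rolodex.fields["footer"] = "HyperCard 1987"  # edit the template...
print(card.view())                           # ...and the card picks up the change
```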
1/29/2022 - 14 minutes, 22 seconds

How Ruby Got Nice

As with many a programming language, Ruby was originally designed as a teaching language - to teach programming to students at universities. From there, it is now used to create various programs, including games, interfaces for websites, scripts to run on desktop computers, backend REST endpoints, and business software. Although Ruby is used for web development more than anything else. It has an elegant syntax that makes the code easy to read; this is one of the reasons why Ruby is so popular, especially with beginners (after all, it was designed to teach programming). Yukihiro Matsumoto, or Matz for short, originally developed the Ruby programming language in the 1990s. Ruby was initially designed as an interpreted scripting language. That first interpreter, MRI, or Matz's Ruby Interpreter, spread quickly. In part because he's nice. In fact, he's so nice that the community coined the motto MINASWAN, or "Matz is nice and so we are nice." Juxtapose this against some of the angrier programmers who develop their own languages. And remember, it was a teaching language. And so he named Ruby after a character he encountered in a children's book. Or because it was a birthstone. Or both. He graduated from the University of Tsukuba and worked on compilers before writing a mail agent in Emacs Lisp. Having worked with Lisp and Perl and Python, he was looking for a language that was truly object-oriented from the ground up. He came up with the idea in 1993 of another Lisp at the core, but something that used objects like Smalltalk. That would allow developers to write less cyclomatically complex code. And yet he wanted to provide higher-order functions for routine tasks like Perl and Python did. Just with native objects rather than those bolted on the side. And he wanted to do so in as consistent a manner as possible. Believe it or not, that meant dynamic typing. And garbage collection for free. And literal notation for things like arrays and regular expressions, while allowing for dynamic reflection for metaprogramming and allowing everything to be an expression. The syntax is similar to Python or Perl, and yet whitespace in things like indentation doesn't play a part. It's concise, and the deep thinking that goes into making something concise can be incredible. And yet freeing.  The first version of Ruby was released in 1995 and allowed programs to be concise, so written with fewer lines of code than would have been possible with other languages at the time. And yet elegant. In 1996, David Flanagan and Jim Weirich grabbed the MRI interpreter and started using Ruby for real projects. And so Ruby expanded outside of Japan.  As the popularity grew, Matz founded his own company called Object Technology Inc. This allowed him to continue developing Ruby while making money. After all, programmers gotta eat too. In 2006, Matsumoto committed the first version of what would eventually become Rails to a version control system (VCS), a precursor to git.  Ruby is written in C, which means it has access to most underlying operating systems given the right API access. It has a vast dictionary with nearly 1 million entries. It can often be found in many event-driven frameworks, with the most popular being Ruby on Rails, a server-side web application framework developed by David Heinemeier Hansson of Basecamp in 2004. Other frameworks include Sinatra (which came in 2007), Roda, Camping (which comes in at a whopping 4k in size), and Padrino. And Ramaze and Merb and Goliath. Each has its own merits.  
These libraries help developers code faster, easier, and more efficiently than if they had to write all the server-side code from scratch. Another aspect of Ruby that made it popular is a simple package manager. RubyGems came about in 2003. Here, we lay out a simple structure that includes a README, a gemspec with info about the gem, a lib directory (the code for the gem), a test directory, and a makefile for Ruby they call a Rakefile. This way the developer of a gem packages up everything needed for others to call it from their own code. And so there are now well over 100,000 gems out there. Not all work with all the interpreters. Ruby went from 1.0 in 1996 to 1.2 in 1998, to 1.4 in 1999, and 1.6 in 2000. Then to 1.8 in 2003, and by then it was getting popular and ready to get standardized. This always slows down changes. So it went on to become an ISO standard in 2012 - the hallmark, if you will, that a language is too big to fail. Ruby 2 came along the next year with nearly full backwards compatibility. And then 3 came in 2020 in order to bring just-in-time compilation, which can make the runtime faster than just interpreting. And unlike the XRuby variant, there's no need to do Java-style compilation. Still, not all Ruby tooling needs to be compiled. Ruby scripts can be loaded in tools like Amazon's Lambda service or Google Cloud Functions. From there, they can talk to tools like MySQL and MongoDB. And it's fun. I mean, Matz uses the word fun. And Ruby can present a challenge that to experienced programmers might be seen as fun. Because anything you can do with other languages, you can do with Ruby. You might not get as much for free as with Spring Security for Java, but it's still an excellent language, and sometimes I can't help but wonder if we shouldn't get so much for free with certain languages. Matz is now the chief architect of Ruby at Heroku. He has since written a slimmed down version of Ruby called mruby and another language called streem. He also wrote a few books on Ruby. Because, you know, he's nice.
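Just to make that gem layout concrete, here's a tiny Python sketch that scaffolds the directory structure described above - the gem name and placeholder files are made up for illustration, and a real gem would of course be generated with RubyGems' own tooling rather than a script like this:

```python
# Sketch the RubyGems layout described above: README, gemspec, lib/, test/, Rakefile.
from pathlib import Path

def scaffold_gem(name, root="."):
    gem = Path(root) / name
    (gem / "lib").mkdir(parents=True, exist_ok=True)   # the code for the gem
    (gem / "test").mkdir(exist_ok=True)                # its tests
    (gem / "README.md").write_text(f"# {name}\n")
    (gem / f"{name}.gemspec").write_text("# gem metadata goes here\n")
    (gem / "Rakefile").write_text("# Ruby's make-like task file\n")
    (gem / "lib" / f"{name}.rb").write_text("# entry point of the gem\n")
    return gem

scaffold_gem("hello_world")  # hypothetical gem name, just for illustration
```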
1/24/2022 - 9 minutes, 12 seconds

Email: From Time Sharing To Mail Servers To The Cloud

With over 2.6 billion active users and 4.6 billion active accounts, email has become a significant means of communication in the business, professional, academic, and personal worlds. Before email we had protocols that enabled us to send messages within small splinters of networks. Time sharing systems like PLATO at the University of Illinois Urbana-Champaign, DTSS at Dartmouth College, BerkNet at the University of California Berkeley, and CTSS at MIT pioneered electronic communication. Private corporations like IBM launched VNET. We could create files or send messages that were immediately transferred to other people. The universities that were experimenting with these messaging systems even used some of the words we use today. MIT's CTSS used the MAIL program to send messages. Glenda Schroeder from there documented in 1965 that messages would be placed into a MAIL BOX. She had already been instrumental in implementing the MULTICS shell that would later evolve into the Unix shell. Users dialed into the IBM 7094 mainframe and communicated within that walled garden with other users of the system. That was made possible after Tom Van Vleck and Noel Morris picked up her documentation and turned it into reality, writing the program in MAD, or the Michigan Algorithm Decoder. But each system was different and mail didn't flow between them. One issue was headers. These are the parts of a message that show what time the message was sent, who sent the message, a subject line, etc. Every team had different formats and requirements. The first attempt to formalize headers was made in RFC 561 by Abhay Bhushan and Ken Pogran from MIT, Jim White at Stanford, and Ray Tomlinson. Tomlinson was a programmer at Bolt Beranek and Newman. He defined the basic structure we use for email while working on a government-funded project on ARPANET (the Advanced Research Projects Agency Network) in 1971. While there, he wrote a tool called CPYNET to send various objects over a network, then ported that into the SNDMSG program used to send messages between users of their TENEX system so people could send messages to other computers. The structure he chose was Username@Computername, because it just made sense to send a message to a user on the computer that user was at. We still use that structure today, although the hostname transitioned to a fully qualified domain name a bit later. Given that he wanted to route messages between multiple computers, he had a keen interest in making sure other computers could interpret messages once received. The concept of instantaneous communication between computer scientists led to huge productivity gains and new, innovative ideas. People could reach out to others they had never met and get quick responses. No more walking to the other side of a college campus. Some even communicated primarily through the computers, taking terminals with them when they went on the road. Email was really the first killer app on the networks that would some day become the Internet. People quickly embraced this new technology. By 1975 almost 75% of the ARPANET traffic was electronic mail, which spread the idea of sending these electronic messages to users on other computers and networks. Most universities that were getting mail only had one or two computers connected to ARPANET. Terminals were spread around campuses, and even smaller microcomputers in places. 
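The user-at-host structure and the headers described above are easy to see with Python's standard email library. A hedged sketch follows - the addresses are made up for illustration, and this only builds a message in memory rather than sending anything:

```python
# Build a message with the basic headers discussed above: who sent it,
# who it goes to, when, and a subject line - plus a body.
from email.message import EmailMessage
from email.utils import formatdate

msg = EmailMessage()
msg["From"] = "tomlinson@bbn-tenexa"      # hypothetical user@computername address
msg["To"] = "licklider@mit-dmcg"          # also made up for illustration
msg["Subject"] = "Testing SNDMSG"
msg["Date"] = formatdate()                # the timestamp header
msg.set_content("QWERTYUIOP, or something like it.")

print(msg)                                # prints the headers followed by the body
```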
This was before DNS (the Domain Name System), so the name of the computer was still just a hostname from the hosts file, and users needed to know which computer and what the correct username was to send mail to one another. Elizabeth “Jake” Feinler had been maintaining a hosts file to keep track of computers on the growing network when her employer, the Stanford Research Institute, was just starting the NIC, or Network Information Center. Once the Internet was formed, that NIC would be the foundation of the InterNIC, which managed the buying and selling of domain names once Paul Mockapetris formalized DNS in 1983. At this point, the number of computers was increasing and not all accepted mail on behalf of an organization. Internet Service Providers (ISPs) began to connect people across the world to the Internet during the 1980s, and for many people, electronic mail was the first practical application they used on the internet. This was made easier by the fact that the research community had already struggled with email standards and in 1981 had defined how servers sent mail to one another using the Simple Mail Transfer Protocol, or SMTP, in RFC 788, updated in 1982 with 821 and 822. Still, the computers at networks like CSNET received email and users dialed into those computers to read the email they stored. Remembering the name of the computer to send mail to was still difficult. By 1986 we also got the concept of routing mail in RFC 974 from Craig Partridge. Here we got the first MX record. Those are DNS records that define the computer that receives mail for a given domain name. So stanford.edu had a single computer that accepted mail for the university. These became known as mail servers. As the use of mail grew and reliance on mail increased, some had multiple mail servers for fault tolerance, for different departments, or to split the load between servers. We also saw various messaging roles split up. A mail transfer agent, or MTA, sent mail between different servers. The Received field in the header is stamped with the time the server acting as the MTA got an email. MTAs mostly used port 25 to transfer mail until encrypted connections were introduced and port 587 started to be used for secure submission. Bandwidth and time on these computers was expensive. There was a cost to make a phone call to dial into a mail provider, and providers often charged by the minute. So people also wanted to store their mail offline and then dial in to send messages and receive messages. Close enough to instant communication. So software was created to manage email storage, which we call a mail client or, more formally, a Mail User Agent, or MUA. This would be programs like Microsoft Outlook and Apple Mail today, or even a webmail client as with Gmail. POP, or Post Office Protocol, was written to facilitate that transaction in 1984. Receive mail over POP and send over SMTP. POP evolved over the years, with POPv3 coming along in 1993. At this point we just needed a username and the domain name to send someone a message. But the number of messages was exploding. As were the needs. Let’s say a user needed to get their email on two different computers. POP mail needed to know to leave a copy of messages on servers. But then those messages all showed up as new on the next computer. So Mark Crispin developed IMAP, or Internet Message Access Protocol, in 1986, which left messages on the server and, by IMAPv4 in the 1990s, had been updated into the version we use today. Now mail clients had a few different options to use when accessing mail.
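As a rough sketch of that division of labor - a mail client handing a message to an MTA over SMTP, then reading the mailbox over IMAP - here is what it looks like with Python’s standard smtplib and imaplib. The hostnames, port, and credentials are placeholders, not a real provider’s settings.

```python
import smtplib
import imaplib
from email.message import EmailMessage

# Hypothetical servers and credentials, for illustration only.
SMTP_HOST, SMTP_PORT = "smtp.example.com", 587   # 587 for submission; 25 is server-to-server
IMAP_HOST = "imap.example.com"
USER, PASSWORD = "alice@example.com", "app-password"

# The mail client (MUA) hands a message to a transfer agent (MTA) over SMTP.
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = USER, "bob@example.org", "Hello"
msg.set_content("Sent over SMTP, fetched later over IMAP.")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
    smtp.starttls()              # encrypt the submission
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)

# Later, the client reads the mailbox over IMAP, leaving mail on the server.
with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    status, data = imap.search(None, "UNSEEN")   # anything new since last check
    print(status, data)
```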
Those previous RFCs focused on mail itself, and the community could use tools like FTP to get files. But over time we also wanted to add attachments to emails, so MIME, or Multipurpose Internet Mail Extensions, became a standard with RFC 1341 in 1992. Those mail and RFC standards would evolve over the years to add better support for encapsulations and internationalization. With the more widespread use of electronic mail, the term was shortened to email and became common in everyday conversation. With the necessary standards, the next few years saw a number of private vendors jump on the internet bandwagon and invest in providing mail to customers. America Online added email in 1993, Echomail came along in 1994, Hotmail added advertisements to messages, launching in 1996, and Yahoo added mail in 1997. All of the portals added mail within a few years. The age of email kicked into high gear in the late 1990s, reaching 55 million users in 1997 and 400 million by 1999. During this time having an email address went from a luxury or curiosity to a societal and business expectation, like having a phone might be today. We also started to rely on digital contacts and calendars, and companies like HP released Personal Information Managers, or PIMs. Some companies wanted to sync those the same way they did email, so Microsoft Exchange was launched in 1996. That original concept went all the way back to PLATO, with David Woolley’s PLATO Notes in 1973, and was Ray Ozzie’s inspiration when he wrote the commercial product that became Lotus Notes in 1989. Microsoft inspired Google, who in turn inspired Microsoft to take Exchange to the cloud with Outlook.com. It didn’t take long after sending mail between computers became possible before we got spam. Then spam blockers and other technology to allow us to stay productive despite the thousands of messages from vendors desperately trying to sell us their goods through drip campaigns. We’ve even had legislation to limit the amount of spam, given that at one point over 9 out of 10 emails were spam. Diligent efforts have driven that number down to just shy of a third at this point. Email is now well over 40 years old and pretty much ubiquitous around the world. We’ve had other tools for instant messaging, messaging within every popular app, and group messaging products like bulletin boards online and now group instant messaging products like Slack and Microsoft Teams. We even have various forms of communication options integrated with one another. Like the ability to initiate a video call within Slack or Teams. Or the ability to toggle the Teams option when we send an invitation for a meeting in Outlook. Every few years there’s a new communication medium that some think will replace email. And yet email is as critical to our workflows today as it ever was.
1/15/2022 • 16 minutes, 25 seconds
Episode Artwork

The Teletype and TTY

Teleprinters, sometimes referred to as teletypes based on the dominance of the Teletype Corporation in their heyday, are devices that send or receive written transmissions over a wire or over radios. Those have evolved over time to include text and images. And while it may seem as though their development corresponds to the telegraph, that’s true only so far as discoveries in electromagnetism led to the ability to send tones or pulses over wires once there was a constant current. That story of the teletype evolved through a number of people in the 1800s. The modern telegraph was invented in 1835 and taken to market a few years later. Soon after that, we were sending written messages encoded and typed on what we called a teletype machine, or teletypewriter if you will. Those were initially invented by a German inventor, Friedrich König, in 1837, the same year Cooke and Wheatstone got their patent on telegraphy in England, and a few years before they patented automatic printing. König figured out how to send messages over about 130 miles. Parts of the telegraph were based on his work. But he used a wire per letter of the alphabet, and Samuel Morse used a single wire and encoded messages with the Morse code he developed. Alexander Bain developed a printing telegraph that used electromagnets that turned clockworks. But keep in mind that these were still considered precision electronics at the time and human labor to encode, signal, receive, and decode communications was still cheaper. Therefore, the Morse telegraph service that went operational in 1846 became the standard. Meanwhile Royal Earl House built a device that used piano keyboards to send letters, which had a shift register to change characters being sent, thus predating the modern typewriter, developed in 1878, by decades. Yet, while humans were cheaper, machines were less prone to error - unless of course they broke down. Then David Edward Hughes developed the first commercial teletype machine, known as the Model 11, in 1855 to 1856. A few small telegraph companies then emerged to market the innovation, including the Western Union Telegraph Company. Picking up where Morse left off, Émile Baudot developed a code that consisted of five units, which became popular in France and spread to England in 1897 before spreading to the US. That’s when Donald Murray added punching data into paper tape for transmissions and incremented the Baudot encoding scheme to add control characters like carriage returns and line feeds. And some of the Baudot codes and Murray codes are still in use. The ideas continued to evolve. In 1902, Charles Krum invented something he called the teletypewriter, picking up on the work started by Frank Pearne and funded by Joy Morton of the Morton Salt company. He filed a patent for his work. He and Morton then formed a new company called the Morkrum Printing Telegraph. Edward Kleinschmidt had filed a similar patent in 1916, so they merged the two companies into the Morkrum-Kleinschmidt Company in 1925 but, to more easily market their innovation, changed the name to the Teletype Corporation in 1928, then selling to the American Telephone and Telegraph Company, or AT&T, for $30M. And so salt was lucrative, but investing salt money netted a pretty darn good return as well. Teletype Corporation produced a number of models over the next few decades. The Models 15 through 35 saw increases in the speed messages could be sent and improved encoding techniques.
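To make the “five units” idea concrete, here is a toy sketch in Python of a Baudot-style code with shift characters. The bit patterns are invented for illustration - they are not the real Baudot, Murray, or ITA2 tables - but they show why five bits only buys 32 combinations and why shift codes were needed to cover a full character set.

```python
# Toy five-bit, Baudot-style code. The bit patterns are invented for
# illustration and are NOT the real Baudot/Murray (ITA2) tables.
LETTERS = {"A": 0b00001, "B": 0b00010, "C": 0b00011, " ": 0b00100}
FIGURES = {"1": 0b00001, "2": 0b00010, "3": 0b00011}
SHIFT_TO_FIGURES = 0b11011  # control codes like these told the receiver
SHIFT_TO_LETTERS = 0b11111  # which of the two code planes to use

def encode(text: str) -> list[int]:
    """Encode text as five-bit units, inserting shift codes as needed."""
    units, in_figures = [], False
    for ch in text.upper():
        if ch in LETTERS:
            if in_figures:
                units.append(SHIFT_TO_LETTERS)
                in_figures = False
            units.append(LETTERS[ch])
        elif ch in FIGURES:
            if not in_figures:
                units.append(SHIFT_TO_FIGURES)
                in_figures = True
            units.append(FIGURES[ch])
    return units

# Five bits per unit: only 32 combinations, so letters and figures
# share codes and the shift characters switch between them.
print([f"{u:05b}" for u in encode("ABC 123")])
```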
As the typewriter became a standard, the 8.5 by 11 inch sheet caught on as the size most easily compatible with those typewriters. The A standard was developed so A0 is a square meter, A1 is half that, A2 half that, and so on, with A4 becoming a standard paper size in Europe. But teletypes often had continual feeds, and so while they had the same width in many cases, paper moved from a small paper tape to a longer roll of paper cut the same size as letter paper. Decades after Krum was out of the company, the US Naval Observatory built what they called a Krum TTY to transmit data over radio, naming their device after him. Now, messages could be sent over a telegraph wire and wirelessly. Thus by 1966, when the Inktronic shipped and printed 1200 characters a minute, it was able to print in Baudot or ASCII, which Teletype had developed for guess who, the Navy. But they had also developed a Teletype they called the Dataspeed with what we think of as a modem today, which evolved into the Teletype 33, the first Teletype to be consistently used with a computer. The teletype could send data to a computer and receive information that was printed, in the same way information would be sent to another teletype operator who would respond in a printout. On a teletype-to-teletype circuit, another teletype on the same line receives that signal. When hooked to a computer though, the operator presses one of the keys on the teletype keyboard and it transmits an electronic signal the computer can act on. Over time, those teletypes could be installed on the other side of a phone line. And if a person could talk to a computer, why couldn’t two computers talk to one another? ASCII was initially published in 1963 so computers could exchange information in a standardized fashion. Bell Labs was involved, and so it’s no surprise we saw ASCII show up within just a couple of years on the Teletype. ASCII was a huge win. Teletype sold over 600,000 of the 32s and 33s. Early video screens cost over $10,000, so interactive computing meant sending characters to a computer, which translated the characters into commands, and those into machine code. But the invention of the integrated circuit, MOSFET, and microchip dropped those prices considerably. When screens dropped in price enough, and Unix came along in 1971, also from the Bell system, it’s no surprise that the first terminal interfaces were referred to as TTY, short for teletype. After all, the developers and users were often literally using teletypes to connect. As computing companies embraced time sharing and added the ability to handle multiple tasks, that evolved into the ability to invoke multiple TTY sessions as a given user, so while waiting for one task to complete we could work on another. And so we got tty1, tty2, tty3, etc. The first GUIs were then effectively macros or shell scripts that were called by clicking a button. And those evolved as well; in most modern operating systems we now open a terminal emulator, which passes what we type along to the shell rather than to a physical terminal. And yet run tty and we can still see the “return user’s terminal name” to quote the man page. Today we interact with computers in a very different way than we did over teletypes. We don’t send text and receive the output in a print-out any longer. Instead we use monitors and keyboards to type out messages through the Internet, as we do over telnet and then ssh, using either binary or ASCII codes.
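That lineage is still easy to see from any scripting language. Here is a tiny sketch using Python’s standard library on a Unix-like system; the device names in the comments will vary by platform.

```python
import os
import sys

# If standard input is attached to a terminal device, report its name,
# much like the classic `tty` command. Device paths vary by OS
# (e.g. /dev/ttys003 on macOS, /dev/pts/0 on Linux).
if sys.stdin.isatty():
    print("terminal:", os.ttyname(sys.stdin.fileno()))
else:
    print("not a tty")  # e.g. when input is piped or redirected
```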
The Teletype and typewriter evolved into today's keyboard, which offers a faster and more efficient way to communicate. Those early CTSS and then Unix C programs that evolved into ls and ssh and cat are now actions performed in graphical interfaces or shells. The last remaining teletypes are now used in airline telephone systems. And following the breakup of AT&T, Teletype Corporation finally ended in 1990, as computer terminals evolved in a different direction. Yet we still see their remnants in everyday use.
1/10/2022 • 13 minutes
Episode Artwork

A History of Esports

It’s human nature to make everything we do competitive. I’ve played football, ran track at times, competed in hacking competitions at Def Con, and even participated in various gaming competitions like Halo tournaments. I always get annihilated by kids whose voices are still cracking, but I played! Humans have been competing in sports for thousands of years. The Lascaux cave paintings in France show people sprinting over 15,000 years ago. The Egyptians were bowling in the 5,000s BCE. The Sumerians were wrestling 5,000 years ago. Mesopotamian art shows boxing in the second or third millennium BCE. The Olmecs in Mesoamerican societies were playing games with balls around the same time. Egyptian monuments show a number of individual sports being practiced in Egypt as far back as 2,000 BCE. The Greeks evolved the games, first with the Minoans and Mycenaeans between 1,500 BCE and 1,000 BCE, and then they first recorded their Olympic games in 776 BCE, although historians seem to agree the games were practiced at least 500 years before that, evolving potentially from funeral games. Sports competitions began as ways to showcase an individual’s physical prowess. Weight lifting, discus, individual events or team sports - sports rely on physical strength, coordination, repetitive action, or some other feat that allows one person or team to stand out. Organized team sports first appeared in ancient times. The Olmecs in Mesoamerica had their ball game, and hurling supposedly evolved before 1,000 BCE, although written records of that only begin around the 16th century, and it could be that it was borrowed through the Greek game harpaston when the Romans evolved it into the game harpastum and it spread with Roman conquests. But the exact rules and timelines of all of these are lost to written history. Instead, written records back up that western civilization team sports began with polo, appearing about 2,500 years ago in Persia. The Chinese gave us a form of kickball they called cuju, around 200 BCE. Football, or soccer for the American listeners, started in 9th century England but evolved into the game we think of today in the 1850s, then a couple of decades later to American football. Meanwhile, cricket came around in the 16th century and then hockey and baseball came along in the mid 1800s, with basketball arriving in the 1890s. That’s also around the same time the modern darts game was born, although that started in the Middle Ages when troops threw arrows or crossbow bolts at wine barrels turned on their sides or sections of tree trunks. Many of these sports are big business today, netting multi-billion dollar contracts for media rights to show and stream games, naming rights to stadiums for hundreds of millions, and players signing contracts for hundreds of millions across all major sports. There’s been a sharp increase in sports contracts since the roaring 1920s, rising steadily as the television started to show up in homes around the world, and then ESPN solidified a new status in our lives when it was created in 1979. Then came the Internet and the money got crazy town. All that money leads the occasional entrepreneurial minded sports enthusiast to try something new. We got the forerunner of the World Wrestling Federation in the 1950s, which evolved out of Vincent J. McMahon’s father’s boxing promotions and put him working with Toots Mondt on what they called Western Style Wrestling. Beating people up has been around since the dawn of life but became an official sport when UFC 1 was launched in 1993. We got the XFL in 1999.
So it’s no surprise that we would take a sport that requires hand-eye coordination and turn that into a team endeavor. That’s been around for a long time, but we call it Esports today.
Video Game Competitions
Competing in video games is almost as old as, well, video games. Spacewar! was written in 1962 and students from MIT competed with one another for dominance of deep space, dogfighting little ships, which we call sprites today, into oblivion. The game spread to campuses and companies as the PDP minicomputers spread. Countless hours were spent playing, and by 1972 there were enough players that they held the first Esports competition, appropriately called the Intergalactic Spacewar! Olympics. Of course, Stewart Brand would report on that for Rolling Stone, having helped mouse inventor Doug Engelbart with the “Mother of All Demos” just four years before. Pinball had been around since the 1930s, or the 1940s with flippers. They could be found around the world by the 1970s, and 1972 was also the first year there was a Pinball World Champion. So game leagues were nothing new. But Brand and others, like Atari founder Nolan Bushnell, knew that video games were about to get huge. Tennis was invented in the 1870s in England and went back to 11th century France. Tennis on a screen would make loads of sense as well when Tennis For Two debuted in 1958. So when Pong came along in 1972, the world (and the ability to mass produce technology) was ready for the first video game hit. So when people flowed into bars, first in the San Francisco Bay Area, then around the country, to play Pong, it’s no surprise that people would eventually compete in the game. Moving from competing in billiards to a big game console just made sense. Now it was a quarter a game instead of just a dart board hanging in the corner. And so when Pong went to home consoles, of course people competed there as well. Then came Space Invaders in 1978. By 1980 we got the first statewide Space Invaders competition, and 10,000 players showed up. The next year there was a Donkey Kong tournament and Billy Mitchell set the record for the game at 874,300, which stood for 18 years. We got the US National Video Game Team in 1983 and competitions for arcade games sprung up around the world. A syndicated television show called Starcade even ran to show competitions, which now we might call streaming. And Tron came in 1982. Then came the video game crash of 1983. But games never left us. The next generation of consoles and arcade games gave us competitions and tournaments for Street Fighter and Mortal Kombat, then first-person games like GoldenEye and other first-person shooters later in the decade, paving the way for games like Call of Duty and World of Warcraft. Then in 1998 a legendary StarCraft tournament was held and 50 million people around the world tuned in on the Internet. That’s a lot of eyeballs. Team options were also on the rise. Netrek had been written to play over the Internet by 16 players at once. Within a few years, massive multiplayer games could have hundreds of players duking it out in larger battle scenes. Some were recorded and posted to web pages. There was appetite for tracking scores for games and competing and even watching games, which we’ve all done over the shoulders of friends since the arcades and consoles of old.
Esports and Twitch
As the 2000s came, Esports grew in popularity. Esports is short for the term electronic sports, and refers to competitive video gaming, which includes tournaments and leagues.
Let’s set aside the earlier gaming tournaments and think of those as classic video games. Let’s reserve the term Esports for events held after 2001. That’s because the World Cyber Games was founded in 2000 and initially held in 2001, in Seoul, Korea (although there was a smaller competition in 2000). The haul was $300,000 and events continue on through the current day, having been held in San Francisco, Italy, Singapore, and China. Hundreds of people play today. That started a movement. Major League Gaming (MLG) came along in 2002 and is now regarded as one of the most significant Esports hosts in the world. The Electronic Sports World Cup came in 2003, among the first of the big tournaments, followed by the introduction of the ESL Intel Extreme Masters in 2007 and many others. The USA Network broadcast their first Halo 2 tournament in 2006. We’ve gone from 10 major tournaments held in 2000 to an incalculable number today. That means more teams. Most Esports companies are founded by former competitors, like Cloud9, 100 Thieves, and FaZe Clan. Team SoloMid is the most valuable Esports organization. Launched by League of Legends star Dan Dinh and his brother in 2009, it is now worth over $400 million and has fielded teams like ZeRo for Super Smash Brothers, Excelerate Gaming for Rainbow Six Siege, Team Dignitas for Counter-Strike: Global Offensive, and even chess grandmaster Hikaru Nakamura. The analog counterpart would be sports franchises. Most of those were started by athletic clubs or people from the business community. Gaming has much lower startup costs and thus far has been more democratic in the ability to start a team with higher valuations. Teams play in competitions held by leagues, of which there seem to be new ones all the time. The NBA 2K League and the Overwatch League are two new leagues that have had early success. One reason for teams and leagues like this is naming and advertising rights. Another is events like The International 2021, with a purse of over $40M. The inaugural League of Legends World Championship took place in 2011. In 2013 another tournament was held in the Staples Center in Los Angeles (close to their US offices). Tickets for the event sold out within minutes. The purse for that was originally $100,000 and has since risen to over $7M. But others are even larger. The Honor of Kings World Champion Cup (Arena of Valor internationally) has a purse of $7.7M, and the Fortnite World Cup Finals has gone as high as $15M. One reason for the leagues and teams is that companies that make games want to promote their games. The video game business is almost an 86 billion dollar industry. Another is that people started watching other people play on YouTube. But then YouTube wasn’t really purpose-built for gaming. Streamers made do using cameras to stream images of themselves in a picture-in-picture frame, but that still wasn’t optimal. Esports had been broadcast (the original form of streaming) before, but streaming wasn’t all that commercially successful until the birth of Twitch in 2011. YouTube had come along in 2005, and Justin Kan and Emmett Shear created Justin.tv in 2007 as a place for people to broadcast video, or stream, online. They started with just one channel: Justin’s life. Like 24 by 7 life. They did Y Combinator and managed to land an $8M seed round. Justin had a camera mounted to his hat, and left that outside the bathroom since it wasn’t that kind of site. They made a video chat system and not only was he streaming, but he was interacting with people on the other side of the stream.
It was like the Truman Show, but for reals. A few more people joined up, but then came other sites to provide this live streaming option. They added forums, headlines, comments, likes, featured categories of channels, and other features but just weren’t hitting it. One aspect was doing really well: gaming. They moved that to a new site in 2011 and called it Twitch. This platform allowed players to stream themselves and their games. And they could interact with their viewers, which gave the entire experience a new interactive paradigm. And it grew fast, with the whole thing being rebranded as Twitch in 2014. Amazon bought Twitch in 2014 for $1B. They made $2.3 billion in 2020, with an average of nearly 3 million concurrent viewers watching nearly 19 billion hours of content provided by nearly 9 million streamers a month. Other services like YouTube Gaming have come and gone but Twitch remains the main way people watch others game. ESPN and others still have channels for Esports, but Twitch is purpose-built for gaming. And watching others play games is no different than Greeks showing up for the Olympics or watching someone play pool or watching Liverpool play Man City. In fact, the money they make is catching up. Platforms like Twitch allow professional gamers and those who announce the games to become their own unique class of celebrities. The highest paid players have made between three and six million dollars, with the top 10 living outside the US and making their hauls from Dota 2. Others have made over a million playing games like Counter-Strike, Fortnite, League of Legends, and Call of Duty. None are likely to hold a record for any of those games for 18 years. But they are likely to diversify their sources of income. Add a YouTube channel, Twitch stream, product placements, and appearances - and a gamer could be looking at doubling what they bring in from competitions. Esports has come far but has far further to go. The total Esports market was just shy of $1B in 2020 and is expected to reach $2.5B in 2025 (which the pandemic may push even faster). Not quite the 100 million that watch the Super Bowl every year or the half billion that tune into the World Cup finals, but growing at a faster rate than the Super Bowl, which has actually declined in the past few years. And the International Olympic Committee recognized the tremendous popularity of Esports throughout the world in 2017 and left open the prospect of Esports becoming an Olympic sport in the future (although with the number of vendors involved that’s hard to imagine happening). Perhaps some day when archaeologists dig up what we’ve left behind, they’ll find some Egyptian obelisk or gravestone with a controller and a high score. Although they’ll probably just scoff at the high score, since they already annihilated that when they first got their neural implants and have since moved on to far better games! Twitch is young in the context of the decades of history in computing. However, the impact has been fast, and along with Esports it shows us a window into how computing has reshaped the ways we seek not only entertainment, but also how we make a living. In fact, the US Government recognized League of Legends as a sport as early as 2013, allowing people to get visas to come into the US and play. And where there’s money to be made, there’s betting and abuse. 2010 saw SaviOr and some of the best StarCraft players to ever play embroiled in a match-fixing scandal. That almost destroyed the Esports gaming industry.
And yet as with the Video Game Crash of 1983, the industry has always bounced back, at magnitudes larger than before.
1/8/2022 • 22 minutes, 34 seconds
Episode Artwork

Of Heath Robinson Contraptions And The Colossus

The Industrial Revolution gave us the rise of factories all over the world in the 1800s. Life was moving faster and we were engineering complex solutions to mass produce items. And many expanded from there to engineer complex solutions for simple problems. Cartoonist Heath Robinson captured the reaction of normal humans to this changing world in the form of cartoons and illustrations of elaborate machines meant to accomplish simple tasks. These became known as “Heath Robinson contraptions” and were a reaction to the changing and increasingly complicated world order as much as anything. Just think of the rapidly evolving financial markets as one sign of the times! Following World War I, other cartoonists made similar cartoons, like Rube Goldberg, giving us the concept of Rube Goldberg machines in the US. And the very idea of breaking down simple operations into Boolean logic, to those who didn’t understand the “why,” would have seemed preposterous. I mean, a wheel with 60 teeth or a complex series of switches and relays to achieve the same result? And yet with flip-flop circuits one would be able to process infinitely faster than it would take that wheel to turn with any semblance of precision. The Industrial Revolution of our data was to come. And yet we were coming to a place in the world where we were just waking up to the reality of moving from analog to digital; Robinson passed away in 1944, just as a series of electromechanical codebreaking machines named after him, and then the Colossus, came to life. These came just one year after Claude Shannon and Alan Turing, two giants in the early history of computers, met at Bell Labs. And a huge step in that transition was a paper by Alan Turing in 1936 called “On Computable Numbers, with an Application to the Entscheidungsproblem.” This would become the basis for a programmable computing machine concept, and so before the war, Alan Turing had published papers about the computability of problems using what we now call a Turing machine - or recipes. Some of the work on that paper was inspired by Max Newman, who helped Turing go off to Princeton to work on all the maths, where Turing would get a PhD in 1938. He returned home and started working part-time at the Government Code and Cypher School during the pre-war buildup. Hitler invaded Poland the next year, sparking World War II. The Poles had gotten pretty good with codebreaking, being situated right between world powers Germany and Russia, and their ability to see troop movements through decrypted communications was one way they were able to keep forces in optimal locations. And yet the Germans got in there. The Germans had built a machine called the Enigma that also allowed their Navy to encrypt communications. Unable to track their movements, Allied forces were playing a cat and mouse game and not doing very well at it. Turing came up with a new way of decrypting the messages and that went into a new version of the Polish Bomba. Later that year, the UK declared war on Germany. Turing’s work resulted in a lot of other advances in cryptanalysis throughout the war. But he also brought home the idea of an electromechanical machine to break those codes - almost as though he’d written a paper on building machines to do such things years before. The Germans had accidentally given away a key to decrypt communications in 1941 and the codebreakers at Bletchley Park got to work on breaking the machines that used the Lorenz Cipher in new and interesting ways.
The work had reduced the amount of losses - but they needed more people. It was time intensive to go through the possible wheel positions or guess at them, and every week meant lives lost. Or they needed more automation of people tasks… So they looked to automate the process. Turing and the others wrote to Churchill directly. Churchill started his memo to General Ismay with “ACTION THIS DAY” and so they were able to get more bombes up and running. Bill Tutte and the codebreakers worked out the logic to process the work done by hand. The same number of codebreakers were able to do a ton more work. The first pass was a device with uniselectors and relays. Frank Morrell did the engineering design to process the logic. And so we got the alpha test of an automation machine they called the Tunny. The start positions were plugged in by hand and it could still take weeks to decipher messages. Max Newman, Turing’s former advisor and mentor, got tapped to work on the project, and Turing was able to take the work of Polish code breakers and others and add sequential conditional probability to guess at the settings of the 12 wheels of the Lorenz machine and thus get to the point they could decipher messages coming out of the German High Command on paper. No written records indicate that Turing was involved much in the project beyond that. Max Newman developed the specs, heavily influenced by Turing’s previous work. They got to work on an electro-mechanical device we now call the Heath Robinson. They needed to be able to store data. They used paper tape - which could process a thousand characters per second using photocell readers - but there were two tapes and they had to run concurrently. Tape would rip, and two tapes running concurrently meant a lot might rip. Charles Wynn-Williams was a brilliant physicist who had worked with electric waves since the late 1920s at Trinity College, Cambridge, and was recruited from a project helping to develop RADAR because he’d specifically worked on electronic counters at Cambridge. That work went into the counting unit, counting how many times a function returned a true result. As we saw with Bell Labs, the telephone engineers were looking for ways to leverage switching electronics to automate processes for the telephone exchange. Turing recommended they bring in telephone engineer Tommy Flowers to design the combining unit, which used vacuum tubes to implement Boolean logic - much as the paper Shannon wrote in 1937 that he gave Turing over tea at Bell Labs earlier in 1943. It’s likely Turing would have also heard of the calculator George Stibitz of Bell Labs built out of relay switches all the way back in 1937. Slow, but more reliable than the vacuum tubes of the era. And it’s likely he influenced those he came to help by collaborating on encrypted voice traffic and likely other projects as much if not more. Inspiration is often best found at the intersection between ideas and cultures. Flowers looked to use vacuum tubes where the wheel patterns were produced. This gave one less set of paper tapes and infinitely more reliability. And a faster result. The programs weren’t stored, but the machine was programmable. Input was made using the shift registers from the paper tape and thyratron rings that simulated the bitstream for the wheels. There was a master control unit that handled the timing between the clock, signals, readouts, and printing. It didn’t predate the Von Neumann architecture. But it didn’t not.
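To get a feel for what that counting unit was doing, here is a toy sketch in Python of the statistical idea: XOR a candidate wheel stream against an intercepted bitstream at every start position and count how often the result comes up true. The wheel pattern and “traffic” here are random stand-ins, not the real Lorenz wheels or Bletchley’s actual algorithms.

```python
import random

# Toy sketch of the counting idea: strip a candidate wheel stream off an
# intercepted bitstream with XOR and count how often a Boolean test comes
# up true. A good wheel setting shows a statistical bias; a bad one looks
# like a coin flip. The wheel pattern and "traffic" are random stand-ins,
# not the real Lorenz wheels or Bletchley's algorithms.
random.seed(1)

def wheel_stream(pattern, start, length):
    """Repeat a wheel's cam pattern from a given start position."""
    return [pattern[(start + i) % len(pattern)] for i in range(length)]

pattern = [random.randint(0, 1) for _ in range(41)]                  # a 41-position wheel
plain   = [1 if random.random() < 0.7 else 0 for _ in range(5000)]   # biased "language"
key     = wheel_stream(pattern, start=17, length=len(plain))
cipher  = [p ^ k for p, k in zip(plain, key)]                        # Vernam-style XOR

# Try every start position; the counting unit just tallies true results.
scores = []
for start in range(len(pattern)):
    guess = wheel_stream(pattern, start, len(cipher))
    true_count = sum(c ^ g for c, g in zip(cipher, guess))
    scores.append((true_count, start))

# With enough characters, the correct setting (17) stands out from the noise.
print("best start position:", max(scores)[1])
```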
The switch panel had a group of switches used to define the algorithm being used, with a plug-board defining conditions. The combination provided billions of combinations for logic processing. Vacuum tube valves were still unstable, but they rarely blew when left on; it was the switching process that killed them. So if they could have the logic gates flow through a known set of wheel settings, the new computer would be more stable. Just one thing - they needed 1,500 valves! This thing would be huge! And so the Colossus Mark 1 was approved by W.G. Radley in 1943. It took 50 people 11 months to build and was able to compute wheel settings for ciphered message tapes. Computers automating productivity at its finest. The switches and plugs could be repositioned, and so not only was Colossus able to get messages decrypted in hours but it could be reprogrammed to do other tasks. Others joined and they got the character reading up to almost 10,000 characters a second. They improved on the design yet again by adding shift registers and got over four times the speed. It could now process 25,000 characters per second. One of the best uses was to confirm that Hitler got tricked into thinking the D-Day attack at Normandy would happen elsewhere. And so the invasion of Normandy was safe to proceed. But the ability to reprogram made it a mostly universal computing machine - proving the Turing machine concept and fulfilling the dreams of Charles Babbage a hundred years earlier. And so the war ended in 1945. After the war, the Colossus machines were destroyed - except the two sent to British GCHQ, where they ran until 1960. So the simple story of Colossus is that it was a series of computers built in England from 1943 to 1945, at the heart of World War II. The purpose: cryptanalysis - or code breaking. Turing went on to work on the Automatic Computing Engine at the National Physical Laboratory after the war and wrote a paper on the ACE - but while they were off to a quick start in computing in England, having the humans who knew the things, they were slow to document given that their wartime work was classified. ENIAC came along in 1946, as did the development of Cybernetics by Norbert Wiener. That same year Max Newman wrote to John Von Neumann (Wiener’s friend) about building a computer in England. He founded the Royal Society Computing Machine Laboratory at the Victoria University of Manchester, got Turing out to help, and built the Manchester Baby, along with Frederic Williams and Thomas Kilburn. In 1946 Newman would also decline an OBE, or Officer of the Order of the British Empire, over what he saw as the inadequate recognition offered to his protege Turing. That’s leadership. They’d go on to collaborate on the Manchester Mark I and Ferranti Mark I. Turing would work on furthering computing until his death in 1954, from taking cyanide after going through years of forced estrogen treatments for being a homosexual. He has since been posthumously pardoned. Following the war, Flowers tried to get a loan to start a computer company - but the very idea was ludicrous and he was denied. He retired from the Post Office Research Station after spearheading the move of the phone exchange to an electric, or what we might think of as a computerized, exchange. Over the next decade, the work from Claude Shannon and other mathematicians would perfect the implementation of Boolean logic in computers. Von Neumann only ever mentioned Shannon and Turing in his seminal 1958 book The Computer and the Brain.
While classified by the British government, the work on Colossus was likely known to Von Neumann, who will get his own episode soon - but suffice it to say he was a mathematician and physicist turned computer scientist who worked on ENIAC and helped study and develop atom bombs - and who codified the von Neumann architecture. We did a whole episode on Turing and another on Shannon, and we have mentioned the 1945 article As We May Think, where Vannevar Bush predicted and inspired the next couple of generations of computer scientists following the advancements in computing around the world during the war. He too would have likely known of the work on Colossus at Bletchley Park. Maybe not the specifics, but he certainly knew of ENIAC - which unlike Colossus was run through a serious public relations machine. There are a lot of heroes to this story. The brave men and women who worked tirelessly to break, decipher, and analyze the cryptography. The engineers who pulled it off. The mathematicians who sparked the idea. The arrival of the computer was almost deterministic. We had work on the Atanasoff-Berry Computer at Iowa State, work at Bell Labs, Norbert Wiener’s work on anti-aircraft guns at MIT during the war, Konrad Zuse’s Z3, Colossus, and other mechanical and electromechanical devices leading up to it. But deterministic doesn’t mean lacking inspiration. And what is the source of inspiration and, when mixed with perspiration - innovation? There were brilliant minds in mathematics, like Turing. Brilliant physicists like Wynn-Williams. Great engineers like Flowers. That intersection between disciplines is the wellspring of many an innovation. Equally important, there’s a leader who can take the ideas, find people who align with a mission, and help clear roadblocks. People like Newman. When they have domain expertise and knowledge - and are able to recruit and keep their teams inspired - they can change the world. And then there are people with purse strings who see the brilliance and can see a few moves ahead on the chessboard - like Churchill. They make things happen. And finally, there are the legions who carried on the work in theoretical, practical, and pure sciences. People who continue the collaboration between disciplines, iterate, and bring products to ever growing markets. People who continue to fund those innovations. It can be argued that our intrepid heroes in this story helped win a war - but that the generations who followed, by connecting humanity and bringing productivity gains to help free our minds to solve bigger and bigger problems, will hopefully some day end war. Thank you for tuning in to this episode of the History of Computing Podcast. We hope to cover your contributions. Drop us a line and let us know how we can. And thank you so much for listening. We are so, so lucky to have you.
12/14/2021 • 19 minutes, 46 seconds
Episode Artwork

Clifford Stoll and the Cuckoo’s Egg

A honeypot is basically a computer made to look like a sweet, yummy morsel that a hacker might find yummy mcyummersons. This is the story of one of the earliest on the Internet. Clifford Stoll has been a lot of things. He was a teacher and a ham radio operator and appears on shows. And an engineer at a radio station. And he was an astronomer. But he’s probably best known for being an accidental systems administrator at Lawrence Berkeley National Laboratory who set up a honeypot in 1986 and used that to catch a KGB-backed hacker. It sounds like it could be a movie. And it was - on public television, called “The KGB, the Computer, and Me.” And a book. Clifford Stoll was an astronomer who stayed on as a systems administrator when a grant he was working on as an astronomer ran out. Many in IT came to the industry accidentally. Especially in the 80s and 90s. Now accountants are meticulous. The monthly accounting report at the lab had never had any discrepancies. So when the lab had a 75 cent accounting error, his manager Dave Cleveland had Stoll go digging into the system to figure out what happened. And yet what he found was far more than the missing 75 cents. This was an accounting error in the time sharing system. The lab leased out compute time at $300 per hour, and everyone who had accessed the system had an account number to bill time to. Well, everyone except a user named hunter. They disabled the user and then got an email that one of their computers tried to break into a computer elsewhere. This was just a couple years after the movie War Games had been released. So of course this was something fun to dig your teeth into. Stoll combed through the logs and found that the account that attempted to break into the computers in Maryland belonged to a local professor named Joe Sventek, now at the University of Oregon. One who it was doubtful made the attempt, because he was out of town at the time. So Stoll set his computer to beep when someone logged in so he could set a trap for the person using the professor’s account. Every time someone connected via a teletype session, or tty, Stoll checked the machine. Until Sventek connected, and with that, he went to see the networking team, who confirmed the connection wasn’t a local terminal but had come in through one of the 50 modems via a dial-up session. There wasn’t much in the form of caller ID. So Stoll had to connect a printer to each of the modems - that gave him the ability to print every command the user ran. A system had been compromised and this user was able to sudo, or elevate their privileges. UNIX System V had been released 3 years earlier and suddenly labs around the world were all running similar operating systems on their mainframes. Someone with a working knowledge of Unix internals could figure out how to do all kinds of things. Like add a program to routine housecleaning items that elevated their privileges. They could also get into the passwd file that at the time housed all the passwords and delete those that were encrypted, thus granting access without a password. And they even went so far as to come up with dictionary brute force attacks, similar to a modern rainbow table, to figure out passwords so they wouldn’t get locked out when the user whose password was deleted called in to reset it again. Being root allowed someone to delete the shell history and, given that all the labs and universities were charging for time, remove any record they’d been there from the call accounting systems.
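To see why those dictionary attacks worked, here is a toy sketch in Python. Real systems of the era used crypt(3) hashes in the passwd file; SHA-256 stands in for that here, and the word list and “stolen” hash are made up for the example.

```python
import hashlib

# Toy dictionary attack. Old Unix systems stored crypt(3) hashes in
# /etc/passwd; SHA-256 stands in for that here just to show the idea.
# The word list and the "stolen" hash are made up for illustration.
def hash_password(word: str) -> str:
    return hashlib.sha256(word.encode()).hexdigest()

stolen_hash = hash_password("wizard")   # pretend this came from a copied passwd file

dictionary = ["password", "hunter", "jaeger", "benson", "wizard", "dragon"]

for candidate in dictionary:
    if hash_password(candidate) == stolen_hash:
        print("password recovered:", candidate)
        break
else:
    print("not in the dictionary")
```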
So Stoll wired a pager into the system so he could run up to the lab any time the hacker connected. Turns out the hacker was using the network to move laterally into other systems, including going from what was ARPANET at the time to military systems on MILNET. The hacker used default credentials for systems and left accounts behind so he could get back in later. Jaeger means hunter in German, and both of those were account names used. So maybe they were looking for a German. Tymnet and Pacific Bell got involved and, once they got a warrant, they were able to get the phone number of the person connecting to the system. Only problem is the warrant was just for California. Stoll scanned the packet delays and determined the hacker was coming in from overseas. The hacker had come in through Mitre Corporation. After Mitre disabled the connection, the hacker slipped up and came in through International Telephone and Telegraph. Now they knew he was not in the US. In fact, he was in West Germany. At the time, Germany was still divided and the Berlin Wall still stood, making it a pretty mature spot for espionage. They confirmed the accounts were indicating they were dealing with a German. Once they had the call traced to Germany, they needed to keep the hacker online for an hour to trace the actual phone number, because the facilities there still used mechanical switching mechanisms to connect calls. So that’s where the honeypot comes into play. Stoll’s girlfriend came up with the idea to make up a bunch of fake government data and host it on the system. Boom. It worked; the hacker stayed on for over an hour and they traced the number. Along the way, this hippy-esque Cliff Stoll had worked with “the Man.” Looking through the logs, the hacker was accessing information about missile systems, military secrets, members of the CIA. There was so much on these systems. So Stoll called some of the people at the CIA. The FBI and NSA were also involved and before long, German authorities arrested the hacker. Markus Hess, whose handle was Urmel, was a German hacker who we now think broke into over 400 military computers in the 80s. It wasn’t just one person though. Dirk-Otto Brezinski, or DOB, Hans Hübner, or Pengo, and Karl Koch, or Hagbard, were also involved. And not only had they stolen secrets, but they’d sold them to the KGB, using Peter Carl as a handler. Back in 1985, Koch was part of a small group of hackers who founded the Computer-Stammtisch in Hanover. That later became the Hanover chapter of the Chaos Computer Club. Hübner and Koch confessed, which gave them espionage amnesty - important in a place with so much of that going around in the 70s and 80s. Koch would later be found burned to death by gasoline, and while it was reported a suicide, that has very much been disputed - especially given that it happened shortly before the trials. DOB and Urmel received a couple years of probation for their part in the espionage, likely less of a sentence given that the investigations took time and the Berlin Wall came down the year they were sentenced. Hübner’s story and interrogation are covered in a book called Cyberpunk - which tells the same story from the side of the hackers. This includes passing into East Germany with magnetic tapes, working with handlers, sex, drugs, and hacker-esque rock and roll. I think I initially read the books a decade apart but would strongly recommend reading Part II of it either immediately before or after The Cuckoo’s Egg. It’s interesting how a bunch of kids just having fun can become something far more.
Similar stories were happening all over the world - another book called The Hacker Crackdown tells of many, many of these stories. Real cyberpunk stories told by one of the great cyberpunk authors. And it continues through to the modern era, except with much larger stakes than ever. Gorbachev may have worked to dismantle some of the more dangerous aspects of these security apparatuses, but Putin has certainly worked hard to build them up. Russian-sponsored and other state-sponsored rings of hackers continue to probe the Internet, delving into every little possible hole they can find. China hacks Google in 2009, Iran hits casinos, the US hits Iranian systems to disable centrifuges, and the list goes on. You see, these kids were stealing secrets - but after the Morris Worm brought the Internet to its knees in 1988, we started to realize how powerful the networks were becoming. But it all started with 75 cents. Because when it comes to security, there’s no amount or event too small to look into.
12/3/2021 • 11 minutes, 38 seconds
Episode Artwork

Buying All The Things On Black Friday and Cyber Monday

The Friday after Thanksgiving to the Monday afterwards is a bonanza of shopping in the United States, where capitalism runs wild with reckless abandon. It’s almost a symbol of a society whose identity is as intertwined with rampant consumerism as it is with freedom and democracy. We are free to spend all our gold pieces. And once upon a time, we went back to work on Monday and looked for a raise or bonus to help replenish the coffers. But since fast internet connections started to show up in offices in the late 90s, we’ve gotten the commodification of holiday shopping online as well - the very digitization of materialism. But how did it come to be? The term Black Friday goes back to a financial crisis in 1869 after Jay Gould and Jim Fisk tried to corner the market on gold. That backfired and led to a Wall Street crash in September of that year. As the decades rolled by, Americans in the suburbs of urban centers had more and more disposable income and flocked to city centers the day after Thanksgiving. Finally, by 1961, the term showed up in Philadelphia, describing the turmoil around the holiday shopping extravaganza there. And so as economic downturns throughout the 60s and 70s gave way to the 1980s, the term spread slowly across the country until marketers decided to use it to their advantage and run sales just on that day. Especially the big chains that were by now in cities where the term was common. And many retailers spent the rest of the year in the red and made back all of their money over the holidays - thus they got in the black. The term went from a negative to a positive. Stores opened earlier and earlier on Friday. Some even unlocked the doors at midnight, after shoppers got a nice nap in following stuffing their faces with turkey earlier in the day. As the Internet exploded in the 90s and buying products online picked up steam, marketers of online e-commerce platforms wanted in on the action. See, they considered brick and mortar to be mortal competition. Most of them should have been looking over their shoulder at Amazon rising, but that’s another episode. And so Cyber Monday was born in 2005, when the National Retail Federation launched the term to the world in a press release. And who wanted to be standing in line outside a retail store at midnight on Friday? Especially when the first Wii was released by Nintendo around that time and was sold out everywhere early Friday morning. But come Cyber Monday it was all over the internet. Not only that, but one of Amazon’s top products that year was the iPod. And the DS Lite. And World of Warcraft. Oh, and that was also when Tickle Me Elmo was sold out everywhere. But available on the Internets. The online world closed the holiday out at just shy of half a billion dollars in sales. But they were just getting started. And I’ve always thought it was kitschy. And yet I joined in with the rest of them when I started getting all those emails. Because opt-in campaigns were exploding as e-tailers honed their skills at appealing to our fear of being the worst parent in the world. And Cyber Monday grew year over year, even as the Great Recession came - first to a billion dollar shopping day in 2010 and, as brick and mortar companies jumped in on the action, to $4 billion by 2017, $6 billion in 2018, and nearly $8 billion in 2019. As Covid-19 spread and people stayed home during the 2020 holiday shopping season, revenues from Cyber Monday grew 15% over the previous year, hitting $10.8 billion.
But it came at the cost of brick and mortar sales, which fell nearly 24% over the same time a year prior. I guess it kinda’ did, but we’ll get to that in a bit. Seeing the success of the Cyber Monday marketers, American Express launched Small Business Saturday in 2010, hoping to lure shoppers into small businesses that accepted their cards. And who doesn’t love small businesses? Politicians flocked into malls in support, including President Obama in 2011. And by 2012, spending was over $5 billion on Small Business Saturday, and it grew to just shy of $20 billion in 2020. To put that into perspective, Georgia, Zimbabwe, Afghanistan, Jamaica, Niger, Armenia, Haiti, Mongolia, and dozens of other countries have smaller GDPs than just one shopping day in the US. Brick and mortar stores are increasingly part of online shopping. Buy online, pick up curb-side. But that trend goes back to the early 2000s, when Walmart was a bigger player on Cyber Monday than Amazon. That changed in 2008 and Walmart fought back with Cyber Week, stretching the field in 2009. Target said “us too” in 2010. And everyone in between hopped in. The sales start at least a week early and spread from online to retail in person, with hundreds of emails flooding my inbox at this point. This year, Americans are expected to spend over $36 billion during the weekend from Black Friday to Cyber Monday. And the split between all the sales is pretty much indistinguishable. Who knows, or to some degree cares, what bucket each gets placed in at this point. Something else was happening in the decades as Black Friday spread to consume the other days around the Thanksgiving holiday: intensifying globalization. Products flooding into the US from all over the world. Some cheap, some better than what is made locally. Some awesome. Some completely unnecessary. It’s a land of plenty. And yet, does it make us happy? My kid enjoyed playing with an empty toilet paper roll just as much as a Furby. And loved the original Xbox just as much as the Switch. I personally need less, and to be honest want less, as I get older. And yet I still find myself getting roped into spending too much on people at the holidays. Maybe we should create “experience Sunday” where instead of buying material goods, we facilitate free experiences for our loved ones. Because I’m pretty sure they’d rather have that than another ugly pair of holiday socks. Actually, that reminds me: I have some of those in my cart on Amazon, so I should wrap this up as they can deliver it tonight if I hurry up. So this Thanksgiving I’m thankful that I and my family are healthy and happy. I’m thankful to be able to do things I love. I’m thankful for my friends. And I’m thankful to all of you for staying with us as we turn another page into the 2022 year. I hope you have a lovely holiday season and have plenty to be thankful for as well. Because you deserve it.
11/26/2021 • 9 minutes, 55 seconds
Episode Artwork

An Abridged History of Free And Open Source Software

In the previous episodes, we looked at the rise of patents and software and their impact on the nascent computer industry. But a copyright is a right. And that right can be given to others in whole or in part. We have all benefited from software where the right to copy was waived, and it’s shaped the computing industry as much, if not more, than proprietary software. Free and Open Source Software (FOSS for short) is a blanket term to describe software that’s free and/or whose source code is distributed for varying degrees of tinkeration. It’s a movement and a choice. Programmers can commercialize their software. But they can also distribute it free of copy protections. And there are about as many licenses as there are opinions about what is unique, types of software, underlying components, etc. But given that many choose to commercialize their work products, how did a movement arise that specifically didn’t? The early computers were custom-built to perform various tasks. Then computers and software were bought as a bundle and organizations could edit the source code. But as operating systems and languages evolved and businesses wanted their own custom logic, a cottage industry for software started to emerge. We see this in every industry - as an innovation becomes more mainstream, the expectations and needs of customers progress at an accelerated rate. That evolution took about 20 years to happen following World War II, and by 1969 the software industry had evolved to the point that IBM faced antitrust charges for bundling software with hardware. And after that, the world of software would never be the same. The knock-on effect was that in the 1970s, Bell Labs pushed away from MULTICS and developed Unix, which AT&T then licensed to researchers, source code included. And so proprietary software was a growing industry, for which AT&T began charging commercial licenses as the bushy hair and sideburns of the 70s were traded for the yuppie culture of the 80s. In the meantime, software had become copyrightable due to the findings of CONTU and the codifying of the Copyright Act of 1976. Bill Gates sent his infamous “Open Letter to Hobbyists” in 1976 as well, defending the right to charge for software in an exploding hobbyist market. And then Apple v Franklin led to the ability to copyright compiled code in 1983. There was a growing divide between those who’d been accustomed to being able to copy software freely and edit source code and those who, in an up-market sense, just needed supported software that worked - and were willing to pay for it, seeing the benefits that automation was having on the capabilities to scale an organization. And yet there were plenty who considered copyrighting software immoral. One of the best remembered is Richard Stallman, or RMS for short. Steven Levy described Stallman as “The Last of the True Hackers” in his epic book “Hackers: Heroes of the Computer Revolution.” In the book, he describes the MIT that Stallman joined, where there weren’t passwords and people didn’t yet pay for software, and then goes through the emergence of the LISP language and the divide that formed between Richard Greenblatt, who wanted to keep The Hacker Ethic alive, and those who wanted to commercialize LISP. The Hacker Ethic was born from the young MIT students who freely shared information and ideas with one another and helped push forward computing in an era they thought was purer in a way, as though it hadn’t yet been commercialized.
The schism saw the death of the hacker culture and two projects came out of Stallman’s technical work: emacs, a text editor that is still freely included in most modern Unix variants, and the GNU project. Here’s the thing, MIT was sitting on patents for things like core memory and thrived in part due to the commercialization or weaponization of the technology they were producing. The industry was maturing and since the days when kings granted patents, maturing technology would be commercialized using that system. And so Stallman’s nostalgia gave us the GNU project, born from an idea that the industry moved faster in the days when information was freely shared and that knowledge was meant to be set free. For example, he wanted the source code for a printer driver so he could fix it and was told it was protected by an NDA and so he couldn’t have it. A couple of years later he announced GNU, a recursive acronym for GNU’s Not Unix. He released the GNU Manifesto in 1985 and launched the Free Software Foundation, often considered the charter of the free and open source software movement, with the GCC compiler following in 1987. Over the next few years as he worked on GNU, he found emacs had a license, GCC had a license, and the rising tide of free software was all distributed with unique licenses. And so the GNU General Public License was born in 1989 - allowing organizations and individuals to copy, distribute, and modify software covered under the license, but with one catch: if someone modified the source, they had to release that modified source along with any binaries they distributed as well. The University of California, Berkeley had benefited from a lot of research grants over the years and many of their works could be put into the public domain. They had brought Unix in from Bell Labs in the 70s and Sun cofounder Bill Joy worked under professor Fabry, who brought Unix in. After working on a Pascal compiler that Unix coauthor Ken Thompson had left behind at Berkeley, Joy and others started working on what would become BSD, not exactly a clone of Unix but with interchangeable parts. They bolted on a networking stack and through the 80s, as Joy left for Sun and DEC got ahold of that source code, there were variants and derivatives like FreeBSD, NetBSD, Darwin, and others. The licensing was pretty permissive and simple to understand: Copyright (c) <year> <owner>. All rights reserved. Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the <organization>. The name of the <organization> may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. By 1990 the Board of Regents at Berkeley accepted a four-clause BSD license that spawned a class of licenses. While it’s matured into other formats like a zero-clause license, it’s one of my favorites as it is truest to the FOSS cause. And the 90s gave us the Apache License, from the Apache Group, loosely based on the BSD License, and then in 2004 leaning away from that with the release of the Apache License 2 that was more compatible with the GPL license. 
Given the modding nature of Apache, they didn’t require derivative works to also be open sourced but did require leaving the license in place for unmodified parts of the original work. GNU never really caught on as an OS in the mainstream, although a collection of tools did. The main reason the OS didn’t go far is probably because Linus Torvalds started releasing prototypes of his Linux operating system in 1991. Torvalds used the GNU General Public License v2, or GPLv2, to license his kernel, having been inspired by a talk given by Stallman. GPL 2 had been released in 1991 and something else was happening as we turned into the 1990s: the Internet. Suddenly the software projects being worked on weren’t just distributed on paper tape or floppy disks; they could be downloaded. The rise of Linux and Apache coincided and so many a web server and site ran that LAMP stack with MySQL and PHP added in there. All open source in varying flavors of what open source was at the time. And collaboration in the industry was at an all-time high. We got the rise of teams of developers who would edit and contribute to projects. One of these was a tool for another aspect of the Internet, email. It was called popclient. Here Eric S. Raymond, or ESR for short, picked it up and renamed it to fetchmail, releasing it as an open source project. Raymond presented on his work at the Linux Congress in 1997, expanded that work into an essay, and then the essay into “The Cathedral and the Bazaar,” where the bazaar is meant to be like an open market. That inspired many to open source their own works, including the Netscape team, which resulted in Mozilla and so Firefox - and another book called “Freeing the Source: The Story of Mozilla” from O’Reilly. By then, Tim O’Reilly was a huge proponent of this free, or source-code-available, type of software as it was known. And companies like VA Linux were growing fast. And many wanted to congeal around some common themes. So in 1998, Christine Peterson came up with the term “open source” in a meeting with Raymond, Todd Anderson, Larry Augustin, Sam Ockman, and Jon “Maddog” Hall, author of the first book I read on Linux. Free software it may or may not be, but open source as a term quickly proliferated throughout the lands. By 1998 there was this funny little company called TiVo that was doing a public beta of a little box with a Linux kernel running on it that bootstrapped a pretty GUI to record TV shows on a hard drive and play them back. You remember when we had to wait for a TV show, right? Or back when some super-fancy VCRs could record a show at a specific time to VHS (but mostly failed for one reason or another)? Well, TiVo meant to fix that. We did an episode on them a couple of years ago but we skipped the term Tivoization and the impact they had on the GPL. As the 90s came to a close, VA Linux and Red Hat went through great IPOs, bringing about an era where open source could mean big business. And true to the cause, they shared enough stock with Linus Torvalds to make him a millionaire as well. And IBM pumped a billion dollars into open source, with Sun open sourcing OpenOffice.org. Now, what really happened there might be that by then Microsoft had become too big for anyone to effectively compete with and so they all tried to pivot around to find a niche, but it still benefited the world and open source in general. By Y2K there was a rapidly growing number of vendors out there putting Linux kernels onto embedded devices. TiVo happened to be one of the most visible. 
Some in the Linux community felt like they were being taken advantage of, because suddenly you had a vendor making changes to the kernel, but their changes only worked on their hardware and they blocked users from modifying the software. So the Free Software Foundation updated the GPL, bundling in some other minor changes, and we got the GNU General Public License (Version 3) in 2007. There was a lot more in GPL 3, given that so many organizations were involved in open source software by then. Here, the full license text and original copyright notice had to be included, along with a statement of significant changes, and source code had to be made available with binaries. And commercial Unix variants struggled, with SGI going bankrupt in 2006 and the use of AIX and HP-UX declining. Many of these open source projects flourished because of version control systems and the web. SourceForge was created by VA Software in 1999 and is a free service that can be used to host open source projects. Concurrent Versions System, or CVS, had been written by Dick Grune back in 1986 and quickly became a popular way to have multiple developers work on projects, merging diffs of code repositories. That gave way to git in the hearts of many a programmer after Linus Torvalds wrote the new versioning system in 2005. GitHub came along in 2008 and was bought by Microsoft in 2018 for $7.5 billion. Seeing a need for people to ask questions about coding, Stack Overflow was created by Jeff Atwood and Joel Spolsky in 2008. Now, we could trade projects on one of the versioning tools, get help with projects or find smaller snippets of sample code on Stack Overflow, or even Google random things (and often find answers on Stack Overflow). And so social coding became a large part of many a programmer’s day. As did dependency management, given how many tools are used to compile a modern web app or app. I often wonder how much of the code in many of our favorite tools is actually original. Another thought is that in an industry dominated by white males, it’s no surprise that we often gloss over previous contributions. It was actually Grace Hopper’s A-2 compiler that was the first software released freely with source for all the world to adapt. Sure, you needed a UNIVAC to run it, and so it might fall into the mainframe era, and with the emergence of minicomputers we got Digital Equipment’s DECUS for sharing software, leading in part to the PDP-inspired need for source that Stallman was so adamant about. General Motors developed the SHARE Operating System for the IBM 701 and made it available through the IBM user group called SHARE. The ARPAnet was free if you could get to it. TeX from Donald Knuth was free. The BASIC distribution from Dartmouth was academic and yet Microsoft sold it for up to $100,000 a license (see Commodore). So it’s no surprise that people avoided paying upstarts like Microsoft for their software or that it took until the late 70s to get copyright legislation and common law. But Hopper’s contributions were kinda like open source v1, the work from RMS to Linux was kinda like open source v2, and once the term was coined and we got the rise of a name and more social coding platforms from SourceForge to git, we moved into a third version of the FOSS movement. Today, some tools are free, some are open source, some are free as in beer (as you find in many a gist), some are proprietary. All are valid. Today there are also about as many licenses as there are programmers putting software out there. And here’s the thing, they’re all valid. 
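The episode doesn’t get into it, but one way today’s projects keep all of those licenses straight is the SPDX identifier convention, where each source file carries a short machine-readable license tag in a comment. Here’s a rough sketch in Python of the kind of audit a maintainer might run over a source tree - the directory, file pattern, and tags are purely illustrative:

```python
# Sketch: scan a source tree for SPDX license identifiers.
# The directory and the license tags found are illustrative only.
import re
from collections import Counter
from pathlib import Path

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def licenses_in_tree(root: str) -> Counter:
    """Count SPDX identifiers found in the first few lines of each source file."""
    counts = Counter()
    for path in Path(root).rglob("*.py"):
        head = "\n".join(path.read_text(errors="ignore").splitlines()[:5])
        match = SPDX_RE.search(head)
        counts[match.group(1) if match else "UNDECLARED"] += 1
    return counts

if __name__ == "__main__":
    # e.g. a file might begin with:  # SPDX-License-Identifier: BSD-2-Clause
    for license_id, count in licenses_in_tree(".").most_common():
        print(f"{license_id}: {count} file(s)")
```

The point isn’t the script; it’s that declaring the license explicitly, per file, is what keeps all of those valid-but-different choices from colliding.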
You see, every creator has the right to restrict the ability to copy their software. After all, it’s their intellectual property. Anyone who chooses to charge for their software is well within their rights. Anyone choosing to eschew commercialization also has that right. And every derivative in between. I wouldn’t judge anyone based on any model they choose. Just as those who distribute proprietary software shouldn’t be judged for retaining their rights to do so. Why not just post things we want to make free? Patents, copyrights, and trademarks are all a part of intellectual property - but as developers of tools we also need to limit our liability, as we’re probably not out there buying large errors and omissions insurance policies for every script or project we make freely available. Also, we might want to limit the abuse of our marks. For example, Linus Torvalds monitors the use of the Linux mark through the Linux Mark Institute. Apparently one William Dell Croce Jr tried to register the Linux trademark in 1995 and Torvalds had to sue to get it back. He provides use of the mark using a free and perpetual global sublicense. Given that his wife won the Finnish karate championship six times, I wouldn’t be messing with his trademarks. Thank you to all the creators out there. Thank you for your contributions. And thank you for tuning in to this episode of the History of Computing Podcast. Have a great day.
11/24/2021 · 22 minutes, 34 seconds
Episode Artwork

Perl, Larry Wall, and Camels

Perl was started by Larry Wall in 1987. Unisys had just released the 2200 series and only a few years earlier had stopped using the UNIVAC name on their mainframes. They had merged with Burroughs the year before to form Unisys. The 2200 was a continuation of the 36-bit UNIVAC 1107, which went all the way back to 1962. Wall was one of the 100,000 employees that helped bring in over ten and a half billion dollars in revenues, making Unisys the second largest computing company in the world at the time. They merged just in time for the mainframe market to start contracting. Wall had grown up in LA and Washington and went to grad school at the University of California at Berkeley. He went to the Jet Propulsion Laboratory after grad school and then landed at System Development Corporation, which had spun out of the SAGE air defense system in 1955 and merged into Burroughs in 1986, becoming Unisys Defense Systems. The Cold War had been good to SDC, which had built the timesharing components of the AN/FSQ-32 and the JOVIAL programming language. But changes were coming. Unix System V had been released in 1983 and by 1986 there was a rivalry with BSD, which had been spun out of UC Berkeley, where Wall went to school. And by then AT&T had built up the Unix System Development Laboratory, so Unix was no longer just a system for academics. Wall had some complicated text manipulation to program on these new Unix systems and, as many of us have run into, when we exceed a certain amount of code, awk becomes unwieldy - both from the sheer amount of hard-to-read code and from a runtime perspective. Others were running into the same thing and so he got started on a new language he named Practical Extraction And Report Language, or Perl for short. Or maybe it stands for Pathologically Eclectic Rubbish Lister. Only Wall could know. The rise of personal computers gave way to the rise of newsgroups, and NNTP went to the IETF to be codified in RFC 977. People were posting tools to this new medium and Wall posted his little Perl project to comp.sources.unix in 1988, quickly iterating to Perl 2, where he added the language’s own form of regular expressions. This is when Perl became one of the best programming languages for text processing and regular expressions available at the time. Another quick iteration came when more and more people were trying to write arbitrary data into objects with the rise of byte-oriented binary streams. This allowed us to not only read data from text streams, terminated by newline characters, but to read and write with any old characters we wanted to. And so the era of socket-based client-server technologies was upon us. And yet, Perl would become even more influential in the next wave of technology as it matured alongside the web. In the meantime, adoption was increasing and the only real resource to learn Perl was the manual, or man, page. So Wall worked with Randal Schwartz to write Programming Perl for O’Reilly press in 1991. O’Reilly has always put animals on the front of their books and this one came with a camel on it. It became known as “the pink camel” because the art was pink; later the art was blue and so it became just “the Camel book.” The book became the primary reference for Perl programmers and by then the web was on the rise. Yet Perl was still mostly a programming language for text manipulation. And most of what we did as programmers at the time was text manipulation. Linux came around in 1991 as well. 
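That practical extraction-and-report style is easier to show than tell. Sketched here in Python rather than Perl, and with a made-up log format, this is the flavor of it: pull fields out of a stream with a regular expression, tally them, print a report.

```python
# A taste of the extraction-and-report style Perl made famous,
# sketched in Python. The "log" format here is invented for illustration.
import re
from collections import Counter

LINE_RE = re.compile(r"^(?P<host>\S+)\s+(?P<status>\d{3})\s+(?P<bytes>\d+)$")

sample = """\
alpha.example.com 200 5120
beta.example.com 404 0
alpha.example.com 200 2048
"""

hits = Counter()
total_bytes = 0
for line in sample.splitlines():
    m = LINE_RE.match(line)
    if not m:
        continue  # skip anything that doesn't match, awk-style
    hits[m.group("host")] += 1
    total_bytes += int(m.group("bytes"))

for host, count in hits.most_common():
    print(f"{host}: {count} request(s)")
print(f"total bytes: {total_bytes}")
```

Swap in a different expression and a different report and you have most of the one-off text munging scripts of that era.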
Those working on these projects probably had no clue what kind of storm was coming with the web, written in 1990, Linux, written in 1991, PHP in 1994, and MySQL in 1995. It was an era of new languages to support new ways of programming. But this is about Perl - whose fate is somewhat intertwined. Perl 4 came in 1991. Perl 5, released in 1994, made the language modular, so you could pull in external libraries of code. And so CPAN came along in 1995 as well. It’s a repository of modules written in Perl that get dropped into a location on a file system set at the time perl was compiled, like /usr/lib/perl5. CPAN covers far more than just core Perl, with over a quarter million packages now available and mirrors on every continent except Antarctica. The second edition of the Camel book coincided with the release of Perl 5 and was published in 1996. The changes to the language had slowed down for a bit, but Perl 5 saw the addition of packages, objects, and references, and the authors added Tom Christiansen to help with the ever-growing Camel book. Perl 5 also brought the extension system we think of today - somewhat based off the module system in Linux. That meant we could load the base perl into memory and call those extensions. Meanwhile, the web had been on the rise and one aspect of the power of the web was that while the front-ends were stateless, cookies had come along to maintain a user state. Given the variety of systems HTML was able to talk to, mod_perl came along in 1996, and Gisle Aas and others started working on ways to embed Perl into pages. Ken Coar chaired a working group in 1997 to formalize the concept of the Common Gateway Interface. Here, we’d have a common way to call external programs from web servers - there’s a tiny sketch of one at the end of these notes. The era of web interactivity was upon us. Pages that were constructed on the fly could call scripts. And much of what was being done was text manipulation. One of the powerful aspects of Perl was that you didn’t have to compile. It was interpreted and yet dynamic. This meant a source control system could push changes to a site without uploading a new jar - as had to be done with a language like Java. And yet, object-oriented programming is weird in Perl. We bless a reference into an object and then invoke its methods with arrow syntax, which is how Perl locates subroutines. That got fixed in Perl 6, but maybe 20 years too late to use a dot notation as is the case in Java and Python. Perl 5.6 was released in 2000 and the team rewrote the Camel book from the ground up in the 3rd edition, adding Jon Orwant to the team. This is also when they began the design process for Perl 6. By then the web was huge and those mod_perl servlets or CGI scripts were, along with PHP and other ways of developing interactive sites, becoming common. And because of CGI, we didn’t have to give the web server daemons access to too many local resources and could swap languages in and out. There are more modern ways now, but nearly every site needed CGI enabled back then. Perl wasn’t just used in web programming. I’ve piped a lot of shell scripts out to perl over the years and used perl to do complicated regular expressions. Linux, Mac OS X, and other Unix variants supported using perl in scripting and as an interpreter for stand-alone scripts. But I do that less and less these days as well. The rapid rise of the web meant that a lot of languages slowed in their development. 
There was too much going on, too much code being developed, too few developers to work on the open source or open standards for a project like Perl. Or is it that Python came along and represented a different approach, with modules in Python created to do much of what Perl had done before? Perl saw small, slow changes. Python moved much more quickly. More modules came faster, and object-oriented programming techniques didn’t have to be retrofitted into the language. As the 2010s came to a close, machine learning was on the rise and many more modules were being developed for Python than for Perl. Either way, the fourth edition of the Camel book came in 2012, covering the Unicode and multithreading support that had come to Perl, now with Brian Foy as a co-author. And yet, Perl 6 sat in an “it’s coming so soon” or “it’s right around the corner” or “it’s imminent” state for over a decade. Perl 6 finally shipped in 2015 and was renamed to Raku in 2019 - given how big a change was involved. They’d opened up requests for comments all the way back in 2000. The aim was to remove what they considered historical warts, which the rest of us might call technical debt. Rather than a camel, they gave it a mascot called Camelia, the Raku Bug. Thing is, Perl had a solid 10% market share for languages around 20 years ago. It was a niche language maybe, but that popularity has slowly fizzled out and appears to be on a short resurgence with the introduction of 6 - but one that might just be temporary. One aspect I’ve always loved about programming is the second we’re done with anything, we think of it as technical debt. Maybe the language or server matures. Maybe the business logic matures. Maybe it’s just our own skills. This means we’re always rebuilding little pieces of our code - constantly refining as we go. If we’re looking at Perl 6 today we have to look at whether we want to try and do something in Python 3 or another language - or try and just update Perl. If Perl isn’t being used in very many microservices, then given the compliance requirements to use each tool in our stack, it becomes somewhat costly to keep improving our craft with Perl rather than looking to solutions that are possibly more expensive at runtime but less expensive to maintain. I hope Perl 6 grows and thrives and is everything we wanted it to be back in the early 2000s. It helped so much in an era and we owe the team that built it and all those modules so much. I’ll certainly be watching adoption with fingers crossed that it doesn’t fade away. Especially since I still have a few Perl-based lambda functions out there that I’d have to rewrite. And I’d like to keep using Perl for them!
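That promised CGI sketch: the interface really was that simple. The web server put the request details into environment variables, ran the script, and streamed whatever the script printed - headers, a blank line, then the body - back to the browser. Here it is in Python rather than Perl, with the greeting obviously just for illustration:

```python
#!/usr/bin/env python3
# Minimal CGI sketch: the server passes the request in environment
# variables and sends our stdout back to the client.
import os
from html import escape
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = escape(params.get("name", ["world"])[0])

# A CGI response is just headers, a blank line, then the body.
print("Content-Type: text/html")
print()
print(f"<html><body><h1>Hello, {name}!</h1></body></html>")
```

Every request spawned a fresh process like this one, which is also why CGI eventually gave way to embedded interpreters like mod_perl and, later, application servers.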
11/21/2021 · 15 minutes
Episode Artwork

The Von Neumann Architecture

John von Neumann was born in Hungary at the tail end of the Austro-Hungarian Empire. The family was made a part of the nobility and, as a young prodigy in Budapest, he learned languages and by 8 years old was doing calculus. By 17 he was writing papers on polynomials. He wrote his dissertation in 1925, adding to set theory with the axiom of foundation and the notion of class, or properties shared by members of a set. He worked on the minimax theorem in 1928, the proof of which established zero-sum games and started another discipline within math, game theory. By 1929 he published the axiom system that led to Von Neumann–Bernays–Gödel set theory. And by 1932 he’d developed foundational work on ergodic theory, which would evolve into a branch of math that looks at the states of dynamical systems, where functions can describe a point’s time dependence in space. And so he of course penned a book on quantum mechanics the same year. Did we mention he was smart? Given the way his brain worked, it made sense that he would eventually gravitate into computing. He went to the best schools with other brilliant scholars who would go on to be called the Martians. They were all researching new areas that required more and more computing - then still done by hand or a combination of hand and mechanical calculators. The Martians included De Hevesy, who won a Nobel Prize for Chemistry. Von Kármán got the National Medal of Science and a Franklin Award. Polanyi developed the theory of knowledge and the philosophy of science. Paul Erdős was a brilliant mathematician who published over 1,500 articles. Edward Teller is known as the father of the hydrogen bomb, working on nuclear energy throughout his life and lobbying for the Strategic Defense Initiative, or Star Wars. Dennis Gabor wrote Inventing the Future and won a Nobel Prize in Physics. Eugene Wigner also took home a Nobel Prize in Physics and a National Medal of Science. Leo Szilard took home an Albert Einstein Award for his work on nuclear chain reactions and joined the Manhattan Project as a patent holder for a nuclear reactor. Physicists and brilliant scientists. And here’s a key component to the explosion in science following World War II: many of them fled to the United States and other western powers because they were Jewish, to get away from the Nazis, or to avoid communists controlling science. And then there was Harsanyi, Halmos, Goldmark, Franz Alexander, Orowan, and John Kemeny, who gave us BASIC. They all contributed to the world we live in today - but von Neumann sometimes hid how smart he was, preferring not to show just how much arithmetic ran through his head. He was married twice and loved fast cars, fine food, bad jokes, and was an engaging and enigmatic figure. He studied measure theory and broke dimension theory into algebraic operators. He studied topological groups, operator algebra, spectral theory, functional analysis, and abstract Hilbert space. Geometry and lattice theory. As with other great thinkers, some of his work has stood the test of time and some has had gaps filled with other theories. And then came the Manhattan Project. Here, he helped develop explosive lenses - a key component to the nuclear bomb. Along the way he worked on economics and fluid mechanics. And of course, he theorized and worked out the engineering principles for really big explosions. 
He was a commissioner of the Atomic Energy Commission and, at the height of the Cold War after working out game theory, developed the concept of mutually assured destruction - giving the world hydrogen bombs and ICBMs and reducing the missile gap. Hard to imagine, but at the time the Soviets actually had a technical lead over the US, which was proven true when they launched Sputnik. As with the other Martians, he fought Communism and Fascism until his death - which won him a Medal of Freedom from then-President Eisenhower. His friend Stanislaw Ulam developed the modern Monte Carlo method and von Neumann got involved in computing to work out those calculations. That, combined with where his research lay, landed him as an early power user of ENIAC. He actually heard about the machine at a station while waiting for a train. He’d just gotten home from England, and while we will never know if he knew of the code-breaking work Turing was doing at Bletchley Park, we do know that he had offered Turing a job at the Institute for Advanced Study in Princeton before World War II and had read Turing’s papers, including “On Computable Numbers,” and understood the basic concepts of stored programs - and breaking down the logic into zeros and ones. He discussed using ENIAC to compute over 333 calculations per second. He could do a lot in his head, but he wasn’t that good of a computer. His input was taken and when Eckert and Mauchly went from ENIAC to EDVAC, or the Electronic Discrete Variable Automatic Computer, the findings were published in a paper called “First Draft of a Report on the EDVAC” - a foundational paper in computing for a number of reasons. One is that Mauchly and Eckert had an entrepreneurial spirit and felt not only that their names should have been on the paper but that it was probably premature, and so they quickly filed a patent in 1945, even though some of what they told him that went into the paper helped to invalidate the patent later. They considered these trade secrets and didn’t share in von Neumann’s idea that information must be set free. In the paper lies an important contribution: von Neumann broke down the parts of a modern computer. He set the information for how these would work free. He broke the logical blocks of how a computer works down into what we still use in the modern era - how, once we strip away the electromechanical parts, a fully digital machine works. Inputs go into a Central Processing Unit, which has an instruction register, a clock to keep operations and data flow in sync, and a counter - and it does the math. It then uses quick-access memory, which we’d call Random Access Memory, or RAM, today, to make processing data and instructions faster. And it would use long-term memory for operations that didn’t need to be as highly available to the CPU. This should sound like a pretty familiar way to architect devices at this point - simple enough, in fact, to sketch in a few lines of code at the end of these notes. The result would be sent to an output device. Think of a modern Swift app for an iPhone - the whole of what the computer did could be moved onto a single wafer once humanity worked out how first transistors and then multiple transistors on a single chip worked. Yet another outcome of the paper was to inspire Turing and others to work on computers after the war. Turing named his ACE, or Automatic Computing Engine, out of respect for Charles Babbage. That led to the addition of storage to computers. After all, punched tape was used for Colossus during the war and punched cards and tape had been around for a while. 
It’s ironic that we think of memory as ephemeral data storage and storage as more long-term storage. But that’s likely more to do with the order these scientific papers came out than anything - and an homage to the impact each had. He’d write The Computer and the Brain, Mathematical Foundations of Quantum Mechanics, The Theory of Games and Economic Behavior, Continuous Geometry, and other books. He also studied DNA and cognition and weather systems, inferring we could predict the results of climate change and possibly even turn back global warming - which by 1950, when he was working on it, was already acknowledged by scientists. As with many of the early researchers in nuclear physics, he died of cancer - invoking Pascal’s wager on his deathbed. He died in 1957 - just a few years too early to get a Nobel Prize in one of any number of fields. One of my favorite aspects of von Neumann was that he was a lifelong lover of history. He was a hacker - bouncing around between subjects. And he believed in human freedom. So much so that this wealthy and charismatic pseudo-aristocrat would dedicate his life to the study of knowledge and public service. So thank you for the von Neumann architecture and for breaking computing down in ways that meant it couldn’t be wholesale patented too early to gain wide adoption. And thank you for helping keep the mutually assured destruction from happening and for inspiring generations of scientists in so many fields. I’m stoked to be alive and not some pile of nuclear dust. And to be gainfully employed in computing. He had a considerable impact on both.
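As promised, here’s that architecture as a toy: one memory holding both program and data, a program counter, an accumulator, and a fetch-decode-execute loop. The three-instruction machine below is invented purely for illustration, sketched in Python:

```python
# A toy von Neumann machine: one memory for program and data,
# a program counter, an accumulator, and a fetch-decode-execute loop.
# The tiny "instruction set" here is invented purely for illustration.

def run(memory):
    pc = 0          # program counter
    acc = 0         # accumulator register
    while True:
        op, arg = memory[pc]        # fetch
        pc += 1
        if op == "LOAD":            # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program and data share the same memory - the heart of the idea.
memory = {
    0: ("LOAD", 10),   # acc <- memory[10]
    1: ("ADD", 11),    # acc <- acc + memory[11]
    2: ("STORE", 12),  # memory[12] <- acc
    3: ("HALT", None),
    10: 2, 11: 3, 12: 0,
}
print(run(memory)[12])  # prints 5
```

Strip away the Python and that loop - fetch, decode, execute, repeat - is still roughly what the chip in your pocket does billions of times a second.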
11/12/2021 · 12 minutes, 24 seconds
Episode Artwork

Getting Fit With Fitbit

Fitbit was founded in 2007, originally as Healthy Metrics Research, Inc, by James Park and Eric Friedman. They had a goal to bring fitness trackers to market. They didn’t invent the pedometer and in fact wanted to go far further. That prize goes to Abraham-Louis Perrelet of Switzerland in 1780, or possibly back to da Vinci. And there are stories of calculating the distance armies moved using various mechanisms that counted steps or the spinning of wagon wheels. The era of wearables arguably began in the 1950s, when the transistor radio showed up from the likes of Akio Morita and Masaru Ibuka’s young Sony. People started to get accustomed to carrying around technology. In 1961, Claude Shannon and Edward Thorp built a small computer to time when balls would land in roulette. Which they put in a shoe. Meanwhile, sensors that could detect motion and the other chips to essentially create a small computer in a watch-sized package were coming down in price. Apple had already released the Nike+iPod Sport Kit the year before, with a little sensor that went in my running shoes. And Fitbit capitalized on an exploding market for tracking fitness. Apple effectively proved the concept was ready for higher end customers. But while the iPod was incredibly popular at the time, what about everyone else? Park and Friedman raised $400,000 on the idea in a pre-seed round and built a prototype. No, it wasn’t actually a wearable, it was a bunch of sensors in a wooden box. That enabled them to shop around for more investors to actually finish a marketable device. By 2008 they were ready to take the idea to TechCrunch 50, and Tim O’Reilly and other panelists from TechCrunch loved it. And they picked up a whopping 2,000 pre-release orders. Only problem is they weren’t exactly ready to take that kind of volume. So they toured suppliers around Asia for months and worked overtime in hotel rooms fixing design and architecture issues. And in 2009 they were finally ready and took 25,000 orders, shipping about one fifth of them. That device was called the Fitbit Tracker and took on the goal of 10,000 steps a day that had become popular in Japan in the 1960s. It’s a little money-clip sized device with just one button that shows the status towards that 10,000 step goal. And once synchronized, we could see tons of information about how many calories we burned and other statistics. Those first orders were sold directly through the web site. The next batch would be much different, going through Best Buy. The margins selling directly were much better and so they needed to tune those production lines. They went to four stores, then ten times that, then 15 times that. They announced the Fitbit Ultra in 2011. Here we got a screen that showed a clock, and it also came with a stopwatch. That would evolve into the Fitbit One in 2012. Bluetooth now allowed us to sync with our phones. That original device would over time evolve to the Zip and then the Inspire Clip. They grew fast in those first few years and enjoyed a large swathe of the market initially, but any time one vendor proves a market, others are quick to fast-follow. The Nike Fuelband came along in 2012. There were also dozens of cheap $15 knock-offs in stores like Fry’s. But those didn’t have nearly as awesome an experience. A simple experience came with the Fitbit Flex, released in 2013. The Fitbit could now be worn on the wrist. It looked more like the original tracker but a little smaller, so it could slide in and out of a wristband. 
It could vibrate, so it could wake us up and remind us to get up and move. And the Fitbit Force came out that year, which could scroll through information on the screen, like our current step count. But that got some bad press for the nickel used on the device, so the Charge came out the next year, doing much of the same stuff. And here we see the price slowly going up from below a hundred dollars to $130 as new models with better accelerometers came along. In 2014 they released a mobile app for all the major mobile platforms that allowed us to sync devices through Bluetooth and opened up a ton of options to show other people our information. Chuck Schumer was concerned about privacy, but the options for fitness tracking were about to explode in the other direction, becoming even less private. That’s the same year the LG G Watch came out, sporting a Qualcomm Snapdragon chip. The ocean was getting redder and devices were becoming more like miniature computers that happened to do tracking as well. After Android Wear, now called Wear OS, was released in 2014, the ocean was bound to get much, much redder. And yet, they continued to grow and thrive. They did an IPO, or Initial Public Offering, in 2015 on the back of selling over 21 million devices. They were ready to reach a larger market. Devices were now in stores like Walmart and Target, and they had badges. It was an era of gamification and they were one of the best in the market at that. Walk enough steps to have circumnavigated the sun? There’s a badge for that. Walk the distance of the Nile? There’s a badge for that. Do a round trip to the moon and back? Yup, there’s a badge for that as well. And we could add friends in the app. Now we could compete to see who got more steps on the day. And of course some people cheated. Once I was wearing a Fitbit on my wrist, I got 60,000 steps one day as I painted the kitchen. So we sometimes didn’t even mean to cheat. And an ecosystem had sprung up around Fitbit. Like Fitstar, a personal training coach, which got acquired by Fitbit and rebranded as Fitbit Coach. 2015 was also when the Apple Watch was released. The Apple Watch added many of the same features, like badges and similar statistics. By then there were models of the Fitbit that could show who was calling our phone or display a text message we got. And that was certainly part of Wear OS for Android. But those other devices were more expensive and Fitbit was still able to own the less expensive part of the market and spend on R&D to still compete at the higher end. They were flush with cash by 2016, so while selling 22 million more devices, they bought Coin and Pebble that year, taking in technology developed through crowdfunding sources and helping mass market it. That’s the same year we got the Fitbit Alta, effectively merging the Charge and the Flex, and we got HR models of some devices, with HR standing for heart rate. Yup, they could now track that too. They bought Vector Watch SRL in 2017, the same year they released the Ionic smartwatch, based somewhat on the technology acquired from Pebble. But the stock took a nosedive, and the market capitalization was cut in half. They added weather to the Ionic and merged that tech with that from the Blaze, released the year before. Here, we see technology changing quickly - Pebble was merged with Blaze, but Wear OS from Google and watchOS from Apple were forcing changes all the faster. The apps on other platforms were a clear gap, as were the sensors baked into so many different integrated circuit packages. 
But Fitbit could still compete. In 2018 they released a cheaper version of the smartwatch called the Versa. They also released an API that allowed for a considerable amount of third party development, as well as Fitbit OS 3. They also bought Twine Health in 2018, partnered with Adidas that year on the Ionic, partnered with Blue Cross Blue Shield to reduce insurance rates, and released the Charge 3 with oxygen saturation sensors and a 40% larger screen than the Charge 2. From there the products got even more difficult to keep track of, as they poked at every different corner of the market. The Inspire, Inspire HR, Versa 2, Versa Lite, Charge 4, Versa 3, Sense, Inspire 2, Luxe. I wasn’t sure if they were going to figure out the killer device or not when Fitbit was acquired by Google in 2021. And that’s where their story ends and the story of the ubiquitous ecosystem of Google begins. Maybe they continue with their own kernels or maybe they’re moving all of their devices to Wear OS. Maybe Google figures out how to pull together all of their home automation and personal tracking devices into one compelling offer. Now they get to compete with Amazon, which now has the Halo to help attack the bottom of the market. Or maybe Google leaves the Fitbit team alone to do what they do. Fitbit has sold over 100 million devices and sports well over 25 million active users. The Apple Watch surpassed that number and blew right past it. Wear OS lives in a much more distributed environment, where companies like Asus, Samsung, and LG sell products, but it appears to have a similar installation base. And it’s a market still growing and likely looking for a leader, as it’s easy to imagine a day when most people have a smart watch. But the world has certainly changed since Mark Weiser was the Chief Technologist at the famed Xerox Palo Alto Research Center, or Xerox PARC, in 1988 when he coined the term “ubiquitous computing.” Technology hadn’t entered every aspect of our lives at the time like it has now. The team at Fitbit didn’t invent wearables. George Atwood invented them in 1783. That was mostly pulleys and mechanics. Per V. Brüel first commercialized the piezoelectric accelerometer in 1943. It certainly took a long time to get packaged into an integrated circuit and from there it took plenty of time to end up on my belt loop. But from there it took less than a few years to go onto my wrist, and once there were apps for all the things, true innovation came way faster. Because it turns out that once we open up a bunch of APIs, we have no idea what amazing things people will build once a device becomes a platform. But none of that would have happened had Fitbit not helped prove the market was ready for Weiser’s ubiquitous computing. And now we get to wrestle with the fallout while innovation is moving even faster. Because telemetry is the opposite of privacy. And if we forget to protect just one of those API endpoints, like not implementing rate throttling or messing up the permissions, or leaving a micro-service open to all the things, we can certainly end up telling the world all about our things. Because the world is watching, whether we think we’re important enough to watch or not.
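On that rate throttling point - it really isn’t much code, which makes it all the more painful when an endpoint ships without it. Here’s a rough token-bucket sketch in Python; the per-user limit is a made-up number and a real deployment would lean on whatever the API gateway already provides:

```python
# Sketch of a token-bucket rate limiter for an API endpoint.
# The per-user limit here (150 requests/hour) is a made-up number.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, but never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per user or API token: 150 requests per hour.
buckets: dict[str, TokenBucket] = {}

def handle_request(user_id: str) -> int:
    bucket = buckets.setdefault(user_id, TokenBucket(150, 150 / 3600))
    return 200 if bucket.allow() else 429  # 429 Too Many Requests

print(handle_request("user-123"))  # 200 until the bucket runs dry
```

A handful of lines like these sitting in front of a telemetry endpoint is often the difference between an embarrassing scrape and a non-story.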
11/5/2021 · 16 minutes, 18 seconds
Episode Artwork

Our Friend, The Commodore Amiga

Jay Miner was born in 1932 in Arizona. He got his Bachelor of Science at the University of California at Berkeley and helped design calculators that used the fancy new MOS chips, where he cut his teeth doing microprocessor design, which put him to work on the MOS 6500 series chips. Atari decided to use those in the VCS gaming console and so he ended up going to work for Atari. Things were fine under Bushnell, but once he was off to do Chuck E Cheese and Warner Communications was running Atari, things started to change. There he worked on chip designs that would go into the Atari 400 and 800 computers, which were finally released in 1979. But by then, Miner was gone after he couldn’t get in step with the direction Atari was taking. So he floated around for a hot minute doing chip design for other companies until Larry Kaplan called. Kaplan had been at Atari and founded Activision in 1979. He had half a dozen games under his belt by then, but was ready for something different by 1982. He and Doug Neubauer saw the Nintendo NES was still using the MOS 6502 core, although now a Ricoh 2A03. They knew they could do better. Miner’s company didn’t want in on it, so they struck out on their own. Together they started a company called Hi-Toro, which they quickly renamed to Amiga. They originally wanted to build a new game console based on the Motorola 68000 chips, which were falling in price. They’d seen what Apple could do with the MOS 6502 chips and what Tandy did with the Z-80. These new chips were faster and had more options. Everyone knew Apple was working on the Lisa using the chips and they were slowly coming down in price. They pulled in $6 million in funding and started to build a game console, codenamed Lorraine. But to get cash flow, they worked on joysticks and various input devices for other gaming platforms. But development was expensive and they were burning through cash. So they went to Atari and signed a contract to give them exclusive access to the chips they were creating. And of course, then came the video game crash of 1983. Amazing timing. That created a shakeup around the industry. Jack Tramiel was out at Commodore, the company he had founded, which rode calculators at the dawn of MOS chip technology. And Tramiel bought Atari from Warner. The console they were supposed to give Atari wasn’t done yet. Meanwhile Tramiel had cut most of the Atari team and was bringing in his trusted people from Commodore, so seeing they’d have to contend with a titan like Tramiel, the team at Amiga went looking for investors. That’s when Commodore bought Amiga to become their new technical team and, next thing you know, Tramiel sues Commodore and that drags on from 1984 to 1987. Meanwhile, the nerds worked away. And by CES of 1984 they were able to show off the power of the graphics with a complex animation of a ball spinning and bouncing and shadows rendered on the ball. Even if the OS wasn’t quite done yet, there was a buzz. By 1985, they announced The Amiga from Commodore - what we now know as the Amiga 1000. The computer was prone to crash and they had very little marketing behind them, but they were getting sales into the high thousands per month. Not only was Amiga competing with the rest of the computer industry, but they were competing with the PET and VIC-20, which Commodore was still selling. So they finally killed off those lines and created a strategy where they would produce a high end machine and a low end machine. These would become the Amiga 2000 and 500. 
Then the Amiga 3000 and 500 Plus, and finally the 4000 and 1200 lines. The original chips evolved into the ECS and then AGA chipsets, but after selling nearly 5,000,000 machines, they just couldn’t keep up with missteps from Commodore after Irving Gould ousted yet another CEO. But those Amiga machines. They were powerful and some of the first machines that could truly crunch the graphics and audio. And those higher end markets responded with tooling built specifically for the Amiga. Artists like Andy Warhol flocked to the platform. We got LightWave, used on shows like Babylon 5. I can still remember that Money For Nothing video from Dire Straits. And who could forget Dev. The graphics might not have aged well but they were cutting edge at the time. When I toured colleges in that era, nearly every art department had a lab of Amigas doing amazing things. And while artists like Calvin Harris might have started out on an Amiga, many slowly moved to the Mac over the ensuing years. Commodore had emerged from a race to the bottom in price and bought themselves a few years in the wake of Jack Tramiel’s exit. But the platform wars were raging, with Microsoft’s DOS and then Windows rising out of the ashes of the IBM PC, and IBM-compatible clone makers were standardizing. Yet Amiga stuck with the Motorola chips, even as Apple was first in line to buy them from the assembly line. Amiga had designed many of their own chips and couldn’t compete with the clone makers at the lower end of the market or the Mac at the higher end. Nor the specialty systems running variants of Unix that were also on the rise. And while the platform had promised to sell a lot of games, the sales were a fourth or less of the other platforms and so game makers slowly stopped porting to the Amiga. They even tried to build early set-top machines, with the CDTV model, which they thought would help them merge the coming set-top television control and the game market using CD-based games. They saw MPEG coming but just couldn’t cash in on the market. We were entering into an era of computing where it was becoming clear that the platform that could attract the most software titles would be the most popular, despite the great chipsets. The operating system had started slow. Amiga had a preemptive multitasking kernel and the first version looked like a DOS windowing screen when it showed up in 1985. Unlike the Mac or Windows 1, it had a blue background with orange interspersed. It wasn’t awesome but it did the trick for a bit. But Workbench 2 was released for the Amiga 3000. They didn’t have a lot of APIs, so developers were often having to write their own tools where other operating systems gave them APIs. It was far more object-oriented than many of its competitors at the time though, and even gave support for multiple languages and hypertext schemes and browsers. Workbench 3 came in 1992, along with the A4000. There were some spiffy updates, but by then there were fewer and fewer people working on the project. And the tech debt piled up. Like the lack of memory protection in the Exec kernel, which meant any old task could crash the operating system. By then, Miner was long gone. He had again clashed with management at the company he founded, which had been purchased. Without the technical geniuses around, as happens with many companies when the founders move on, they seemed almost listless. They famously only built features people asked for. Unlike Apple, who guided the industry. Miner passed away in 1994. 
Commodore itself went bankrupt that same year, in 1994. The Amiga brand was bought and sold to a number of organizations but nothing more ever became of them. Having defeated Amiga, the Tramiel family sold off Atari in 1996. The age of game consoles by American firms would be over until Microsoft released the Xbox in 2001. IBM had pivoted out of computers, and the web, which had been created in 1989, was on the way in full force by then. The era of hacking computers together was officially over.
10/28/2021 · 13 minutes, 32 seconds
Episode Artwork

All About Amdahl

Gene Amdahl grew up in South Dakota and, as with many during the early days of computing, went into the Navy during World War II. He got his degree from South Dakota State in 1948 and went on to the University of Wisconsin-Madison for his PhD, where he got the bug for computers in 1952, joining the ranks of IBM that year. At IBM he worked on the iconic 704 and then the 7030, but found the bureaucracy stifling. And yet he came back to become the Chief Architect of the IBM S/360 project. They pushed the boundaries of what was possible with transistorized computing and along the way, Amdahl gave us Amdahl’s Law, an important aspect of parallel computing: how much faster a task can actually get when it’s split across different CPUs, given the part of it that can’t be split (there’s a quick sketch of the math at the end of these notes). Think of it like the law of diminishing returns applied to processing. Contrast this with Fred Brooks’ Brooks’ Law - which says that adding incremental engineers doesn’t make projects happen faster by the same increment, and can even cause a project to take more time. As with Seymour Cray, Amdahl had ideas for supercomputers and left IBM again in 1970 when they didn’t want to pursue them - ironically just a few years after Thomas Watson Jr admitted that just 34 people at CDC had kicked IBM out of their leadership position in the market. First he needed to be able to build a computer, then move into supercomputers. Fully transistorized computing had somewhat cleared the playing field. So he developed the Amdahl 470V/6 - more reliable, more pluggable, and so cheaper than the IBM S/370. He also used virtual machine technology so customers could simulate a 370 and so run existing workloads cheaper. The first went to NASA and the second to the University of Michigan. During the rise of transistorized computing they just kept selling more and more machines. The company grew fast, taking nearly a quarter of the market share. As we saw in the CDC episode, the IBM antitrust case was again giving a boon to other companies. Amdahl was able to leverage the fact that IBM software was getting unbundled from the hardware as a big growth hack. As with Cray at the time, Amdahl wanted to keep to one CPU per workload and developed chips and electronics with Fujitsu to enable doing so. By the end of the 70s they had grown to 6,000 employees on the back of a billion dollars in sales. And having built a bureaucratic organization like the one he just left, he left his namesake company much as Seymour Cray had left CDC after helping build it (and would later leave Cray to start yet another Cray). That would be Trilogy Systems, which failed shortly after an IPO. I guess we can’t always bet on the name. Then Andor International. Then Commercial Data Servers, now a part of Xbridge Systems. Meanwhile the 1980s weren’t kind to the company with his name on the masthead. The rise of Unix and of first minicomputers, then standard servers, meant people were building all kinds of new devices. Amdahl started selling servers, given the new smaller and pluggable form factors. They sold storage. They sold software to make software, like IDEs. The rapid proliferation of networking and open standards let them sell networking products. Fujitsu ended up growing faster, and with Gene Amdahl gone and competition with IBM mounting, Amdahl tried to merge with Storage Technology Corporation, or StorageTek as it might be considered today. 
CDC had pushed some of its technology to StorageTek during their demise, and StorageTek, in the face of this new competition, ended up filing Chapter 11 and later getting picked up by Sun for just over $4 billion. But Amdahl was hemorrhaging money as we moved into the 90s. They sold off half the shares to Fujitsu, laid off over a third of their now 10,000-plus workforce, and by the year 2000 had been lapped by IBM on the high-end market. They sold off their software division, and Fujitsu acquired the rest of the shares. Many of the customers then moved to the then-new IBM Z series servers that were coming out with 64-bit G3 and G4 chips, as opposed to the 31-bit chips Amdahl - now Fujitsu, under the GlobalServer mainframe brand - sells. Amdahl came out of the blue, or Big Blue. On the back of Gene Amdahl’s name and a good strategy to attack that S/360 market, they took 8% of the mainframe market from IBM at one point. But they sold to big customers and eventually disappeared as the market shifted to smaller machines and a more standardized lineup of chips. They were able to last for a while on the revenues they’d put together, but ultimately, without someone at the top with a vision for the future of the industry, they just couldn’t make it as a standalone company. The High Performance Computing server revenues steadily continue to rise at Fujitsu though - hitting $1.3 billion in 2020. In fact, in a sign of the times, the 20 million Euro PRIMEHPC FX700 that’s going to the Minho Advanced Computing Centre in Portugal is a petascale computer built on an ARM plus x86 architecture. My how the times have changed. But as components get smaller, more precise, faster, and more mass producible, we see the same types of issues with companies being too large to pivot quickly from the PC to the post-PC era. Although at this point, it’s doubtful they’ll have a generation’s worth of runway from a patron like Fujitsu to be able to continue in business. Or maybe a patron who sees the benefits downmarket from the new technology that emerges from projects like this and takes on what amounts to nation-building to pivot a company like that. Only time will tell.
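And that promised sketch of Amdahl’s Law: the speedup from adding processors is capped by whatever fraction of the job stays serial, no matter how many processors you throw at it. A few lines of Python make the diminishing returns obvious (the 95% figure is just an example):

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
# where p is the fraction of the work that can be parallelized
# and n is the number of processors.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the serial 5% dominates:
for n in (2, 8, 64, 1024):
    print(f"{n:>5} processors -> {amdahl_speedup(0.95, n):5.2f}x speedup")
# The limit as n grows is 1 / (1 - p) = 20x.
```

Even an infinitely wide machine tops out at 1 / (1 - p) - with 5% serial work, that’s a 20x ceiling - which is part of why Amdahl and Cray kept arguing for fewer, faster CPUs per workload.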
10/24/2021 · 8 minutes, 47 seconds
Episode Artwork

The Dartmouth Time Sharing System and Time Sharing

DTSS, or the Dartmouth Time Sharing System, began at Dartmouth College in 1963. That was the same year Project MAC started at MIT, which is where we got Multics, which inspired Unix. Both contributed in their own way to the rise of the time sharing movement, an era in computing when people logged into computers over teletype devices and ran computing tasks - treating the large mainframes of the era like a utility. The notion had been kicking around in 1959, but then John McCarthy at MIT started a project on an IBM 704 mainframe. And PLATO was doing something similar over at the University of Illinois, Champaign-Urbana. 1959 is also when John Kemeny and Thomas Kurtz at Dartmouth College bought a Librascope General Purpose computer, then being made in partnership between the Royal Typewriter Company and Librascope - which would later be sold off to Lockheed Martin. Librascope had Stan Frankel - who had worked on both the Manhattan Project and the ENIAC. And he architected the LGP-30 in 1956, which ended up at Dartmouth. At this point, the computer looked like a desk with a built-in typewriter. Kurtz had four students that were trying to program in ALGOL 58. And they ended up writing a language called DOPE in the early 60s. But they wanted everyone on campus to have access to computing - and John McCarthy said why not try this new time sharing concept. So they went to the National Science Foundation and got funding for a new computer, which, to the chagrin of the local IBM salesman, ended up being a GE-225. This baby was transistorized. It sported 10,000 transistors and double that number of diodes. It could do floating-point arithmetic, used a 20-bit word, and came with 186,000 magnetic cores for memory. It was so space age that one of the developers, Arnold Spielberg, would father one of the greatest film directors of all time. Likely straight out of those diodes. Dartmouth also picked up a front-end processor called a DATANET-30 from GE. This only had an 18-bit word size but could do 4k to 16k words and supported hooking up 128 terminals that could transfer data to and from the system at up to 300 bits a second over Bell 103 modems. Security wasn’t a thing yet, so these things had direct memory access to the 225, which was a 235 by the time they received the computer. They got to work in 1963, installing the equipment and writing the code. The DATANET-30 received commands from the terminals and routed them to the mainframe. It scanned the terminals for input 110 times per second and queued a command up to run when the return key was pressed, taking into account routine tasks the computer might be doing in the background. Keep in mind, the actual CPU was only doing one task at a time, but it seemed like it was multi-tasking - a round-robin trick simple enough that it’s sketched in a few lines of code at the end of these notes. Another aspect of democratizing computing across campus was to write a language that was more approachable than a language like Algol. And so they released BASIC in 1964, picking up where DOPE left off, and picking up a more marketable name. Here we saw a dozen undergraduates develop a language that was as approachable as the name implies. Some of the students went to Phoenix, where the GE computers were built. And the powers at GE saw the future. After seeing what Dartmouth had done, GE ended up packaging the DATANET-30 and GE-235 as one machine, which they marketed as the GE-265 the next year. And here we got the first commercially viable time-sharing system, which started a movement. 
The movement was so successful that GE decided to get out of making computers and focus instead on selling access to time sharing systems. By 1968 they had shot up to 40% of the time sharing market of the day. Dartmouth picked up a GE Mark II in 1966 and got to work on DTSS version 2. Here, they added some of the concepts coming out of the Multics project that was part of Project MAC at MIT and built on previous experiences. They added pipes and communication files to promote inter-process communications - thus getting closer to the kind of multi-user conferencing being done on PLATO with Notes. Things got more efficient and they could handle more and more concurrent sessions. This is when they went from just wanting to offer computing as a basic right on campus to opening up to schools in the area. Nearby Hanover High School came online first and by 1967 they had over a dozen schools connected. Using further grants from NSF they added another dozen schools to what by then they were calling the Kiewit Network. They then added other smaller colleges and by 1971 supported a whopping 30,000 users. And by 1973 they supported leased line connections all the way to Ohio, Michigan, New York, and even Montreal. The system continued on in one form or another, allowing students to code in FORTRAN, COBOL, LISP, and yes… BASIC. It became less of a thing as Personal Computers started to show up here and there. But BASIC didn’t. Every computer needed a BASIC. But people still liked to connect on the system and share information. At least, until the project was finally shut down in 1999. Turns out we didn’t need time sharing once the Internet came along. Following the early work done by pioneers, companies like Tymshare and CompuServe were born. Tymshare came from two members of the GE team, Thomas O’Rourke and David Schmidt. They ran on SDS hardware and by 1970 had over 100 people, focused on time sharing with their Tymnet system and spreading into Europe by the mid-70s, selling time on their systems until the cost of personal computing caught up and they were acquired by McDonnell Douglas in 1984. CompuServe began similarly on a PDP-10, but by the time they were acquired by H&R Block they had successfully pivoted into a dial-up online services company and over time focused on selling access to the Internet. And they survived through to an era when they migrated their own proprietary tooling to HTML in the late 90s - although they were eventually merged into AOL and are now a part of Verizon Media. So the pivot bought them an extra decade or so. Time sharing and BASIC proliferated across the country and then the world from Dartmouth. Much of this - and a lot of personal stories from the people involved - can be found in Dr. Joy Rankin’s “A People’s History of Computing in the United States.” Published in 2018, it’s a fantastic read that digs in deep on the ways that many of these systems evolved. There are other works, but she does a phenomenal job tying events into one another. One consistent theme across her book is societal impact. These pioneers democratized access to computing. Many of those who built businesses around time sharing missed the rapidly falling price of chips and the ready access to personal computers that were coming. They also missed that BASIC would be monetized by companies like Microsoft.
But they brought computing to high schools in the area, established blueprints for teaching that are used to this day, and - as Grace Hopper did a generation before - made us think of even more ways to make programming accessible to a new generation with BASIC. One other author of note here is John Kemeny. His book “Man and the Computer” is a must read. He couldn’t have known personal computing was coming - but he was far more prophetic than not about cloud operations, as we get back to a time sharing-esque model of computing. And we do owe him, Kurtz, and everyone else involved a huge debt for their work. Many others pushed the boundaries of what was possible with computers. They pushed the boundaries of what was possible with accessibility. And now we have ubiquity. So when we see something complicated - something that doesn’t seem all that approachable - maybe we should just wonder if, by some stretch, we can make it a bit more BASIC. Like they did.
10/14/2021 - 12 minutes, 1 second

eBay, Pez, and Immigration

We talk about a lot of immigrants in this podcast. There are the Hungarian mathematicians and scientists that helped usher in the nuclear age and were pivotal in the early days of computing. There are the Germans who found a safe haven in the US following World War II. There are a number of Jewish immigrants who fled persecution, like Jack Tramiel - a Holocaust survivor who founded Commodore and later took the helm at Atari. An Wang immigrated from China to attend Harvard and stayed. And the list goes on and on. Georges Doriot, the father of venture capital, was born in France in 1899 and came to the US, also to go to Harvard. We could even go back further and look at great thinkers like Nikola Tesla, who emigrated from the Austro-Hungarian Empire. And then there’s the fact that many Americans, and most of the greats in computer science, are immigrants if we go a generation or four back. Pierre Omidyar’s parents were Iranian. They moved to Paris so his mom could get a doctorate in linguistics at the famous Sorbonne. While in Paris, his dad became a surgeon, and they had a son. They didn’t move to the US to flee oppression but found opportunity in the new land, with his dad becoming a urologist at Johns Hopkins. He learned to program in high school and got paid to do it at a whopping 6 bucks an hour. Omidyar went on to Tufts, where he wrote shareware to manage memory on a Mac, and then to the University of California, Berkeley before going to work on the MacDraw team at Apple. He started a pen-computing company, then a little e-commerce company called eShop, which Microsoft bought. And then he ended up at General Magic in 1994. We did a dedicated episode on them - but supporting developers at a day job let him have a little side hustle building these newish web page things. In 1995, his girlfriend, who would become his wife, wanted to auction off (and buy) Pez dispensers online. So Omidyar, who’d been experimenting with e-commerce since eShop, built a little auction site. He called it AuctionWeb. But that was a little boring. They lived in the Bay Area around San Francisco and so he changed it to electronic Bay, or eBay for short. The first sale was a broken laser printer he had lying around, which he originally posted for a dollar and which, after a week, went for $14.83. The site was hosted out of his house and when people started using the site, he needed to upgrade the plan. It was gonna cost 8 times the original $30. So he started to charge a nominal fee to those running auctions. More people continued to sell things and he had to hire his first employee, Chris Agarpao. Within just a year they were doing millions of dollars of business. And this is when they hired Jeffrey Skoll to be the president of the company. By the end of 1997 they’d already done 2 million auctions and had taken $6.7 million in venture capital from Benchmark Capital. More people, more weird stuff. But no guns, drugs, booze, Nazi paraphernalia, or legal documents. And nothing that was against the law. They were growing fast and by 1998 brought in veteran executive Meg Whitman to be the CEO. She had been a VP of strategy at Disney, then the CEO of FTD, then a GM for Playskool before that. By then, eBay was making $4.7 million a year with 30 employees. Then came Beanie Babies. And excellent management. They perfected the online auction model, with new vendors coming into their space all the time but never managing to unseat the giant. Over the years they made onboarding fast and secure.
It took minutes to start selling, and the sellers are where the money is made - a transaction fee charged per sale, in addition to a nominal percentage of each transaction. Executives flowed in from Disney, Pepsi, GM, and anywhere they were looking to expand. Under Whitman’s tenure they weathered the storm of the dot com bubble bursting, grew from 30 to 15,000 employees, took the company to an IPO, bought PayPal, bought StubHub, and scaled the company up to handle over $8 billion in revenue. The IPO made Omidyar a billionaire. John Donahoe replaced Whitman in 2008 when she decided to make a run at politics, working on Romney and then McCain’s campaigns. She then ran for the governor of California and lost. She came back to the corporate world taking on the CEO position at Hewlett-Packard. Under Donahoe they bought Skype, then sold it off. They bought part of Craigslist, then tried to develop a competing product. And finally sold off PayPal, which is now a public entity in its own right. Over the years since, revenues have gone up and down. Sometimes due to selling off companies like they did with PayPal and later with StubHub in 2019. They now sit at nearly $11 billion in revenues, over 13,000 employees, and are a mature business. There are still over 300,000 listings for Beanie Babies. And, in a nod to the original inspiration, over 50,000 listings for the word Pez. Omidyar has done well, growing his fortune to what Forbes estimated to be just over $13 billion. Much of which he’s pledged to give away during his lifetime, having joined Bill Gates and Warren Buffett’s Giving Pledge. So far, he’s given away well over a billion with a focus on education, governance, and citizen engagement. Oh, and this will come as no surprise - helping fund consumer and mobile access to the Internet. Much of this giving is funneled through the Omidyar Network. The US just evacuated over 65,000 Afghans following the collapse of that government. Many an oppressive government runs off the educated, those who are sometimes capable of the most impactful dissent. When some of the best and most highly skilled of an entire society leave, they leave behind a vacuum that furthers the collapse. And yet they find a home in societies known for inclusion and opportunity, surrounded by inspiring stories of other immigrants who made a home and took advantage of that opportunity. Or whose children could. Those melting pots in the history of science come when diversity of people and disciplines combines to make society better for everyone. Even in the places they left behind. Anyone who’s been to Hungary or Poland or Germany - places where people once fled - can see it in the street every time people touch a mobile device and are allowed to be whoever they want to be. Thank you to the immigrants, past and future, for joining us to create a better world. I look forward to welcoming the next wave with open arms.
10/7/2021 - 9 minutes, 46 seconds

Ross Perot For President

Ross Perot built two powerhouse companies and changed the way politicians communicate with their constituents. Perot was an Eagle Scout who went on to join the US Naval Academy in 1949, and served in the Navy until the late 1950s. He then joined the IBM sales organization and one year ended up meeting his quota in the second week of the year. He had all kinds of ideas for new things to do and sell, but no one was interested. So he left and formed a new company called Electronic Data Systems, or EDS, in 1962. You see, these IBM mainframes weren’t being used for time sharing so most of the time they were just sitting idle. So he could sell the unused time from one company to another. Perot learned from the best. As with IBM he maintained a strict dress code. Suits, no facial hair, and a high and tight crew cut like the one you’d find him still sporting years after his Navy days. And over time they figured out many of these companies didn’t have anyone capable of running these machines in the first place, so they could also step in and become a technology outsourcer, doing maintenance and servicing machines. Not only that, but they were perfectly situated to help process all the data from the new Medicare and Medicaid programs that were just starting up. States had a lot of new paperwork to process and that meant computers. He hired Morton Meyerson out of Bell Helicopter in 1966; Meyerson would become the president and effectively created the outsourcing concept in computing. Meyerson served as the president of EDS before leaving to take a series of executive roles at other organizations, including CTO at General Motors in the 1980s, before retiring. EDS went public in 1968. He’d taken $1,000 in seed money from his wife Margot to start the company, and his stake was now worth $350 million, which would rise sharply in the ensuing years as the company grew. By the 1970s they were practically printing cash. They were the biggest insurance data provider and added credit unions, then financial markets, and were perfectly positioned to help build the data networks that ATMs and point of sale systems would use. By the start of 1980 they were sitting on a quarter billion dollars in revenues and 8,000 employees. They continued to expand into new industries with more transactional needs, adding airlines and travel. He sold EDS to General Motors in 1984 for $2.5 billion and got $700 million personally. Meyerson stayed on to run the company and by 1990 their revenues topped $5 billion with nearly 50,000 employees. Perot just couldn’t be done with business. He was good at it. So in 1988 he started another firm, Perot Systems. The company grew quickly. Perot knew how to sell, how to build sales teams, and how to listen to customers and build services products they wanted. Perot again looked for an effective leader and tapped Meyerson yet again, who became the CEO of Perot Systems from 1992 to 1998. Perot’s son Ross Jr. eventually took over the company. In 2008, EDS and their 170,000 employees were sold to Hewlett-Packard for $13.9 billion and in 2009 Perot Systems was sold to Dell for $3.9 billion. Keep in mind that Morton Meyerson was a mentor to Michael Dell. When they were sold, Perot Systems had 23,000 employees and $2.8 billion in revenues. That’s roughly a 1.4x multiple of revenues, which isn’t as good as the roughly 2x multiple Perot got off EDS - but none too shabby given that by then multiples were down for outsourcers.
Based on his work and that of others, he’d built two companies worth nearly $20 billion before 2010, employing nearly 200,000 people. Along the way, Perot had some interesting impacts other than just building so many jobs for so many humans. He passed on an opportunity to invest in this little company called Microsoft. So when Steve Jobs left Apple and looked for investors he jumped on board, pumping $20 million into NeXT Computer, and getting a nice exit when the company went to Apple for nearly half a billion. Perot was philanthropic. He helped a lot of people coming home from various armed services in his lifetime. He was good to those he loved. He gave $10 million to have his friend Morton Meyerson’s name put on the Dallas Symphony Orchestra’s Symphony Center. And he was interested in no BS politics. Yet politics had been increasingly polarized since Nixon. So Perot also ran for president of the US in 1992, against George Bush and Bill Clinton. He didn’t win but he flooded the airwaves with common sense arguments about government inefficiency and a declining market for doing business. He showed computer graphics with all the charts and graphs you can imagine. And while he didn’t get even one vote in the electoral college, he did manage to get 19 percent of the popular vote. His message was one of populism: take the country back and stop deficit spending, just like he ran his companies - a message that persists in various wings of the Republican Party to this day. Especially in Perot’s home state of Texas. He didn’t win, but he effectively helped define the Contract with America that Newt Gingrich and the 90s era of oversized-suit-jacket Republicans used as a strategy. He argued for things to help the common people - not politicians. Ironically, those that took much of his content actually did just the opposite, slowing down the political machine by polarizing the public. And they allowed deficit spending to increase on their watch. He ran again in 1996 but this time got far fewer votes and didn’t end up running for office again. He had a similar impact on IBM. Around 30 years after leaving the company, his success in services was one of the many inspirations for IBM pivoting into services as well. By then the services industry was big enough for plenty of companies to thrive and while sales could be competitive they all did well as personal computing put devices on desks across the world and those devices needed support. Perot died in 2019, one of the couple hundred richest people in the US. Navy Lieutenant. Founder. Philanthropist. Texan. Father. Husband. His impact on the technology industry was primarily around seeing waste. Wasted computing time. Wasted staffing where more efficient outsourcing paradigms were possible. He inspired massive shifts in the industry that persist to this day.
9/30/2021 - 10 minutes, 56 seconds

The Osborne Effect

The Osborne Effect isn’t an episode about Spider-Man that covers turning green or orange and throwing bombs off little hoverboards. Instead it’s about the impact of The Osborne 1 computer on the history of computers. Although many might find discussing the Green Goblin or Hobgoblin much more interesting. The Osborne 1 has an important place in the history of computing because when it was released in 1981, it was the first portable computer that found commercial success. Before the Osborne, there were portable teletype machines for sure, but computers were just starting to get small enough that a fully functional machine could be taken on an airplane. It ran version 2.2 of the CP/M operating system and came with a pretty substantial bundle of software. Keep in mind, there weren’t internal hard drives in machines like this yet - instead, CP/M came on a set of floppies. It came with MBASIC from Microsoft, dBASE II from Ashton-Tate, the WordStar word processor, SuperCalc for spreadsheets, the Grammatik grammar checker, the Adventure game, early ledger tools from Peachtree Software, and tons of other software. By bundling so many titles, they created a climate where other vendors did the same thing, like Kaypro. After all, nothing breeds competitors like the commercial success of a given vendor. The Osborne came before flat panel screens, so it had a built-in CRT. This and the power supply and the heavy case meant it weighed almost 25 pounds and came in at just shy of $1,800. Imagine two disk drives with a 5 inch screen in the middle. The keyboard, complete with a full 10-key pad, was built into a cover that could be pulled off and used to interface with the computer. The whole thing could fit under a seat on an airplane. Airplane seats were quite a bit larger than they are today back then! We think of this as a luggable rather than a portable because of that and because computers didn’t have batteries yet. Instead it pulled up to 37 watts of power. All that in a 20 inch wide case that stood 9 inches tall. The two people most commonly associated with the Osborne are Adam Osborne and Lee Felsenstein. Osborne got his PhD from the University of Delaware in 1968 and went to work in chemicals before he moved to the Bay Area, started writing books about computers, and founded a company called Osborne and Associates to publish them. He sold that to McGraw-Hill in 1979. By then he’d been hanging around the Homebrew Computer Club for a few years and there were some pretty wild ideas floating around. He saw Jobs and Wozniak demo the Apple I and watched their rise. Founders and engineers from Cromemco, IMSAI, Tiny BASIC, and Atari were also involved there - mostly before any of those products were built. So with the money from McGraw-Hill and sales of some of his books like An Introduction To Microcomputers, he set about thinking through what he could build. Lee Felsenstein was another guy from that group who’d gotten his degree in electrical engineering at Berkeley before co-creating Community Memory, a project to build an early bulletin board system on top of an SDS 940 timesharing mainframe with links to terminals like a Teletype Model 33 sitting at Leopold’s Records in Berkeley. That had started up back in 1973 when Doug Engelbart donated his machine from The Mother of All Demos and eventually moved to minicomputers as those became more available.
Having seen the world go from a mainframe the size of a few refrigerators to minicomputers and then to early microcomputers like the Altair, when a hardware hacker like Felsenstein paired up with someone with a little seed money like Osborne, magic was bound to happen. The design was similar to the NoteTaker that Alan Kay had built at Xerox in the 70s - but hacked together from parts they could find. Like 5 inch Fujitsu floppy drives. They made 10 prototypes with metal cases and quickly moved to injection molded plastic cases, taking them to the 1981 West Coast Computer Faire and getting a ton of interest immediately. Some thought the screen was a bit too small, but at the time the bundled software alone justified the price. By the end of 1981 they’d had months where they did a million dollars in sales and they fired up the assembly line. People bought modems to hook to the RS-232 compatible serial port and printers to hook to the parallel port. Even external displays. Sales were great. They were selling over 10,000 computers a month and Osborne was lining up more software vendors, offering stock in the Osborne Computer Corporation. By 1983 they were preparing to go public and developing a new line of computers, one of which was the Osborne Executive. That machine would come with more memory, a slightly larger screen, an expansion slot and of course more software using sweetheart licensing deals that accompanied stock in the company to keep the per-unit cost down. He also announced the Vixen - same chipset but lighter and cheaper. The only issue: announcing machines that weren’t shipping yet created a problem, which we now call the Osborne Effect. People didn’t want the Osborne 1 any more. Seeing something new was on the way, people cancelled their orders to wait for the Executive. Sales disappeared almost overnight. At the time, computer dealers pushed a lot of hardware and the dealers didn’t want to have all that stock of an outdated model. Revenue disappeared and this came at a terrible time. The market was changing. IBM showed up with a PC, Apple had the Lisa and were starting to talk about the Mac. Kaypro had come along as a fierce competitor. Other companies had clued in on the software bundling idea. The Compaq portable wasn’t far away. The company ended up cancelling the IPO and instead filing for bankruptcy. They tried to raise money to build a luggable or portable IBM clone - and if they had done so maybe they’d be what Compaq is today - a part of HP. The Osborne 1 was cannibalized by an Osborne Executive that hadn’t even shipped yet. Other companies would learn the same lesson throughout history. And yet the Osborne opened our minds to this weird idea of having machines we could take with us on airplanes. Even if they were a bit heavy and had pretty small screens. And while the timing of announcements is only one aspect of the downfall of the company, the Osborne Effect is a good reminder to be deliberate about how we talk about future products. Especially for hardware, but we also have to be careful not to sell features that don’t exist yet in software.
9/26/2021 - 9 minutes, 43 seconds

Chess Throughout The History Of Computers

Chess is a game that came out of 7th century India, originally called chaturanga. It evolved over time, perfecting the rules - and spread to the Persians from there. It then followed the Moorish conquerors from Northern Africa to Spain and from there spread through Europe. It also spread from there up into Russia and across the Silk Road to China. It’s had many rule variations over the centuries but few changes since computers learned to play the game. Thus, computers learning chess marks a pivotal time in the history of the game. Part of chess is thinking through every possible move on the board and planning a strategy. Based on the move of each player, we can review the board, compare the moves to known strategies, and base our next move on either blocking the strategy of our opponent or carrying out a strategy of our own to get the opposing king into checkmate. An important moment in the history of computers is when computers got to the point that they could beat a chess grandmaster. That story goes back to an inspiration from around 1770, when Wolfgang von Kempelen built a machine called The Turk to impress Austrian Empress Maria Theresa. The Turk was a mechanical chess playing robot with a Turkish head in Ottoman robes that moved pieces. The Turk was a maze of cogs and wheels and moved the pieces during play. It travelled through Europe, beating the great Napoleon Bonaparte and besting Benjamin Franklin, and later toured the young United States. It had many owners and they all kept the secret of the Turk. Countless thinkers wrote about theories about how it worked, including Edgar Allan Poe. But eventually it was consumed by fire and the last owner told the secret. There had been a person in the box moving the pieces the whole time. All those moving parts were an illusion. Still, in 1868, a knockoff of a knockoff called Ajeeb was built by a cabinet maker named Charles Hooper. Again, people like Theodore Roosevelt and Harry Houdini were bested, along with thousands of onlookers. Charles Gumpel built another in 1876 - this time going from a person hiding in a box to using a remote control. These machines inspired people to think about what was possible. And one of those people was Leonardo Torres y Quevedo, who built a board that had electromagnets to move pieces and light bulbs to let you know when the king was in check or mate. Like all good computer games it also had sound. He started the project in 1910 and by 1914 it could play a king and rook endgame, or a game where there are two kings and a rook and the party with the rook tries to get the other king into checkmate. At the time even a simplified set of instructions was revolutionary, and he showed his invention off in Paris, where other notable thinkers saw it at a conference, including Norbert Wiener, who later described how minimax search could be used to play chess in his book Cybernetics. Torres y Quevedo had also built an analytical machine based on Babbage’s work in 1920, adding electromagnets for memory, and would continue building mechanical and analog calculating machines throughout his career. Mikhail Botvinnik was 9 at that point, and the Russian revolution wound down in the early 1920s when the Soviet Union was founded following the fall of the Romanovs. He would become one of the first Soviet players granted the international Grandmaster title in 1950, in the early days of the Cold War.
That was the same year Claude Shannon wrote his seminal work, “Programming a Computer for Playing Chess.” The next year Alan Turing worked on turning his chess algorithm into executable code for a Ferranti Mark I but sadly never got to see it completed before his death. The prize for actually playing a game would go to Paul Stein and Mark Wells, working on the MANIAC in 1956. Due to the capacity of computers at the time, the board was smaller, but the computer beat an actual human. But the Russians were really into chess in the years that followed the crowning of their first grandmaster. In fact it was held up as a sign of superior Communist politics. Botvinnik also happened to be interested in electronics, and went to school in Leningrad University's Mathematics Department. He wanted to teach computers to play a full game of chess. He focused on selective searches which never got too far, as the Soviet machines of the era weren’t that powerful. Still, a program running on the BESM managed to play a full game in 1957. Meanwhile John McCarthy at MIT introduced the idea of an alpha-beta search algorithm to minimize the number of nodes to be traversed in a search, and he and Alan Kotok shipped A Chess Playing Program for the IBM 7090 Computer, which would be updated by Richard Greenblatt when moving from the IBM mainframes to a DEC PDP-6 in 1965, as a side project for his work on Project MAC while at MIT. Here we see two things happening. One, we were building better and better search algorithms to allow computers to think more moves ahead in smarter ways. The other was that computers were getting better. Faster certainly, but also with more space to work with in memory, and with the move to a PDP, truly interactive rather than batch processed. Mac Hack VI, as Greenblatt’s program would eventually be called, added transposition tables - caches of positions already evaluated so they didn’t have to be searched again. He tuned the algorithms, something akin to what we would call machine learning today, and in 1967 it became the first computer program to defeat a person at the tournament level and get a chess rating. For his work, Greenblatt would become an honorary member of the US Chess Federation. By 1970 there were enough computers playing chess to hold the North American Computer Chess Championship, and colleges around the world started holding competitions. By 1971 Ken Thompson of Bell Labs, in a sign of the times, wrote a computer chess game for Unix. And within just 5 years we got the first chess game for the personal computer, called Microchess. From there computers got incrementally better at playing chess. Chess games shipped to regular humans on home computers, as dedicated physical chess machines, and as little cheap electronic knockoffs. By the 80s regular old computers could evaluate thousands of moves. Ken Thompson kept at it, developing Belle from 1972 through 1983. He and others added move generators, special circuits, dedicated memory for the transposition table, and refined the alpha-beta algorithm started by McCarthy, getting to the point where it could evaluate nearly 200,000 moves a second. He even got the computer to the rank of master but the gains became much more incremental. And then came IBM to the party. Deep Blue began with researcher Feng-hsiung Hsu, as a project called ChipTest at Carnegie Mellon University. IBM Research asked Hsu and Thomas Anantharaman to complete a project they had started: building a computer program that could take out a world champion. Hsu started with the design of Thompson’s Belle.
But with IBM’s backing he had all the memory and CPU power he could ask for. Joseph Hoane and Murray Campbell joined, and Jerry Brody from IBM led the team in the sprint towards taking their machine, Deep Thought, to a match where reigning World Champion Garry Kasparov beat it in 1989. They went back to work and built Deep Blue, which beat Kasparov on their third attempt, in 1997. Deep Blue was composed of 32 RS/6000 nodes running 200 MHz chips, split across two racks and running IBM AIX - with a whopping 11.38 gigaflops of speed. And a chess engine can be pretty much unbeatable today on an M1 MacBook Air, which comes pretty darn close to running at a teraflop. Chess gives us an unobstructed view of the emergence of computing in an almost linear fashion. From the human-powered codification of the electromechanical foundations of the industry, to the emergence of computational thinking with Shannon and cybernetics, to MIT on IBM mainframes when Artificial Intelligence was young, to Project MAC with Greenblatt, to Bell Labs with a front seat view of Unix, to college competitions, to racks of IBM servers. It even has little misdirections, like the World War II-era research from Konrad Zuse, who wrote chess algorithms of his own. And the mechanical Turk concept even lives on with Amazon’s Mechanical Turk service, where we can hire people to do things that are still easier for humans than machines.
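To make that search idea concrete, here's a short, purely illustrative Python sketch of minimax with the alpha-beta cutoffs McCarthy introduced. To keep it self-contained it plays a toy take-away game instead of chess: a position is just a pile of stones, a move removes one to three stones, and whoever takes the last stone wins. Real engines layer move ordering, transposition tables, and an evaluation function for non-terminal positions on top of this same skeleton.

def alphabeta(pile, maximizing, alpha=-2, beta=2):
    # Terminal position: the previous player took the last stone and won,
    # so the side to move has lost.
    if pile == 0:
        return -1 if maximizing else 1
    if maximizing:
        best = -2
        for take in (1, 2, 3):
            if take > pile:
                break
            best = max(best, alphabeta(pile - take, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:   # opponent already has a better option: prune
                break
        return best
    else:
        best = 2
        for take in (1, 2, 3):
            if take > pile:
                break
            best = min(best, alphabeta(pile - take, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Piles that are a multiple of four are losses for the player to move.
for pile in range(1, 9):
    print(pile, "win" if alphabeta(pile, True) > 0 else "loss")

The same kind of look-ahead and pruning is what Belle and Deep Blue did - just over a vastly larger game tree, with custom hardware doing the heavy lifting.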
9/16/2021 - 12 minutes, 58 seconds

Sage: The Semi-Automatic Ground Environment Air Defense

The Soviet Union detonated their first nuclear bomb in 1949, releasing 20 kilotons worth of an explosion and sparking the nuclear arms race. A weather reconnaissance mission confirmed that the Soviets did so and Klaus Fuchs was arrested for espionage, after passing blueprints for the Fat Man bomb that had been dropped on Japan. A common name in the podcast is Vannevar Bush. At this point he was the president of the Carnegie Institution and put together a panel to verify the findings. The Soviets were catching up to American science. Not only did they have a bomb but they also had new aircraft that were capable of dropping a bomb. People built bomb shelters, schools ran drills to teach students how to survive a nuclear blast, and within a few years we’d moved on to the hydrogen bomb. And so the world lived in fear of nuclear fall-out. Radar had come along during World War II and we’d developed Ground Controlled Intercept, an early radar network. But that wouldn’t be enough to protect against this new threat. If one of these Soviet bombers, like the Tupolev Tu-16 “Badger”, were to come into American airspace, the prevailing thought was that we needed to shoot it down before the payload could be delivered. The Department of Defense started simulating what a nuclear war would look like. And they asked the Air Force to develop an air defense system. Given the great work done at MIT, much under the careful eye of Vannevar Bush, they reached out to George Valley, a professor in the Physics Department who had studied nuclear weapons. He also sat on the Air Force Scientific Advisory Board, toured some of the existing sites, and took a survey of the US assets. He sent his findings and they eventually made their way to General Vandenberg, who assigned General Fairchild to assemble a committee which would become the Valley Committee, or more officially the Air Defense Systems Engineering Committee, or ADSEC. ADSEC dug in deeper and decided that we needed a large number of radar stations with a computer that could aggregate and then analyze data to detect enemy aircraft in real time. John Harrington had worked out how to convert radar returns into digital data that could be sent over telephone lines. They just needed a computer that could crunch the data as it was received. And yet none of the computer companies at the time were able to do this kind of real time operation. We were still in a batch processing mainframe world. Jay Forrester at MIT was working on the idea of real-time computing. Just one problem: the Servomechanisms Lab, where he was working on Project Whirlwind for the Navy for flight simulation, was over budget, and while they’d developed plenty of ground-breaking technology, they needed more funding. So Forrester was added to ADSEC and Whirlwind took on the job of processing the digital radar information. By the end of 1950, the team was able to complete successful tests of sending radar information to Whirlwind over the phone lines. Now it was time to get funding, which was proposed at $2 million a year to fund a lab. Given that Valley and Forrester were both at MIT, they decided it should be at MIT. Here, they saw a way to help push the electronics industry forward, and the Air Force’s Chief Scientist Louis Ridenour knew that wherever that lab was built would become the next scientific hotspot.
The president of MIT at the time, James Killian, wasn’t exactly jumping at the idea of MIT becoming an arm of the Department of Defense, so he put together 28 scientists to review the plans from ADSEC. That review became Project Charles, and it threw its support behind forming the new lab. They had measured twice and were ready to cut. There were already projects being run by the military during the arms buildup named after other places surrounding MIT, so they picked Project Lincoln as the name. They appointed F. Wheeler Loomis as the director with a mission to design a defense system. As with all big projects, they broke it up into five small projects, or divisions; things like digital computers, aircraft control and warning, and communications. A sixth did the business administration for the five technical divisions and another delivered technical services as needed. They grew to over 300 people by the end of 1951 and over 1,300 in 1952. They moved offsite and built a new campus - thus establishing Lincoln Lab. By the end of 1953 they had written a memo called A Proposal for Air Defense System Evolution: The Technical Phase. This called for a net of radars to be set up that would track the trajectory of all aircraft in the US airspace and beyond. And to build communications to deploy the weapons that could destroy those aircraft. The Manhattan Project had brought in the nuclear age but this project grew to be larger, as now we had to protect ourselves from the potential devastation we had wrought. We were firmly in the Cold War, with America testing the hydrogen bomb in 1952 and the Soviets doing so in 1955. That was the same year the prototype of the AN/FSQ-7 arrived to replace Whirlwind. To protect the nation from these bombs they would need hundreds of radars, 24 centers to receive data, and 3 combat centers. They planned for direction centers to have a pair of AN/FSQ-7 computers, which were the Whirlwind design evolved. That meant half a million lines of code, which was by far the most ambitious piece of software written to date. Forrester had developed magnetic-core memory for Whirlwind. That doubled the speed of the computer. They hired IBM to build the AN/FSQ-7 computers and from there we started to see commercial applications as well, when IBM added core memory to the 704 mainframe in 1955. Stalin was running labor camps and purges. An estimated nine million people died in Gulags or from hunger. Chairman Mao visited Moscow in 1957, sparking the Great Leap Forward policy that saw 45 million people die. All in the name of building a utopian paradise. Americans were scared. And Stalin was distrustful of computers for any applications beyond scientific computing for the arms race. By contrast, people like Ken Olsen from Lincoln Lab left to found Digital Equipment Corporation and sell modular mini-computers on the mass market, with DEC eventually rising to be the number two computing company in the world. The project also needed software and so that was farmed out to RAND, who would have over 500 programmers work on it. And a special display to watch planes as they were flying, which began as a Stromberg-Carlson Charactron cathode ray tube. IBM got to work building the 24 FSQ-7s, with each coming in at a whopping 250 tons and nearly 50,000 vacuum tubes - and of course that magnetic core memory. All this wasn’t just theoretical. Given the proximity, they deployed the first net of around a dozen radars around Cape Cod as a prototype.
They ran dedicated phone lines from Cambridge and built the first direction center, equipping it with an interactive display console that showed an x for each object being tracked, with labels added. Then Robert Everett came up with the idea of a light gun that could be used as a pointing device, along with a keyboard, to control the computers from a terminal. They tested the Cape Cod installation in 1953 and added long range radars in Maine and New York by the end of 1954, working out bugs as they went. The Suffolk County Airfield on Long Island was added so Strategic Air Command could start running exercises for response teams. By the end of 1955 they put the system to the test and it passed all requirements from the Air Force. The radars detected the aircraft and were then able to control manned antiaircraft operations. By 1957 they were adding logic and capacity to the system, having fine-tuned it over a number of test runs until they got to a 100 percent interception rate. They were ready to build out the direction centers. The research and development phase was done - now it was time to produce an operational system. Western Electric built a network of radar and communication systems across Northern Canada that became known as the DEW line, short for Distant Early Warning. They added increasingly complicated radars and layers of protection - Buckminster Fuller even joined for a bit to develop geodesic fiberglass domes to protect the radars. They added radar to offshore platforms that looked like Texas oil rigs, experimented with radar on planes and ships, and worked out how to connect all of those back to the main system. By the end of 1957 the system was ready to move into production, integrating live weapons into the code and connections. This is where MIT was calling it done for their part of the program. The only problem was that when the Air Force looked around for companies willing to take on such a large project, no one could. So the MITRE Corporation was spun out of Lincoln Labs, pulling in people from a variety of other government contractors, and it continues on to this day working on national security, GPS, election integrity, and health care. They took the McChord airfield online as DC-12 in 1957, then Syracuse, New York in 1958, and started phasing in automated response. Andrews, Dobbins, Geiger Field, Los Angeles Air Defense Sector, and others went online over the course of the next few years. The DEW line went operational in 1962, extending from Iceland to the Aleutians. By 1963, NORAD had a Combined Operations Center where the war room became reality. Burroughs eventually won a contract to deploy new D825 computers to form a system called BUIC II, and with the rapidly changing release of new solid state technology those got replaced with a Hughes AN/TSQ-51. With the rise of Airborne Warning and Control Systems (AWACS), the ground systems started to slowly get dismantled in 1980, being phased out completely in 1984, the year after WarGames was released. In WarGames, Matthew Broderick plays David Lightman, a young hacker who happens upon a game - one John von Neumann himself might have written as he applied game theory to the nuclear threat. Lightman almost starts World War III when he tries to play Global Thermonuclear War. He raises the level of DEFCON and so inspires a generation of hackers who founded conferences like DEFCON and to this day war dial, or war drive, or war whatever. The US spent countless tax dollars on advancing technology in the buildup for World War II and the years after.
The Manhattan Project, Project Whirlwind, SAGE, and countless others saw increasing expenditures. Kennedy continued the trend in 1961 when he started the process of putting humans on the moon. And the unpopularity of the Vietnam war, which US soldiers had been dying in since 1959, caused a rollback of spending. The legacy of these massive projects was huge spending to advance the sciences required to produce each. The need for these computers in SAGE and other critical infrastructure to withstand a nuclear war led to ARPANET, which over time evolved into the Internet. The subsequent privatization of these projects, the rapid advancement in making chips, and the drop in costs alongside frequent doublings of speed - with findings from each discipline finding their way into others - then gave us personal computing and the modern era of PCs, then mobile devices. But it all goes back to projects like ENIAC, Whirlwind, and SAGE. Here, we can see generations of computing evolve with each project. I’m frequently asked what’s next in our field. It’s impossible to know exactly. But we can look to mega projects, many of which are transportation related - and we can look at grants from the NSF. And DARPA and many major universities. Many of these produce new standards so we can also watch for new RFCs from the IETF. But the coolest tech is probably classified, so ask again in a few years! And we can look to what inspires - sometimes that’s a perceived need, like thwarting nuclear war. Sometimes mapping human genomes isn’t a need until we need to rapidly develop a vaccine. And sometimes, well… sometimes it’s just returning to some sense of normalcy. Because we’re all about ready for that. That might mean not being afraid of nuclear war as a society any longer. Or not being afraid to leave our homes. Or whatever the world throws at us next.
9/9/2021 - 18 minutes, 10 seconds

IBM Pivots To Services In The 90s

IBM is the company with nine lives. They began out of the era of mechanical and electro-mechanical punch card computing. They helped bring the mainframe era to the commercial market. They played their part during World War II. They helped make the transistorized computer mainstream with the S/360. They helped bring the PC into the home. We’ve covered a number of lost decades - and moving into the 90s, IBM was in one. One that had been masked for a time by an influx of revenues from the personal computer business. That revenue gave IBM a shot in the arm. But one that was temporary. By the early 90s the computer business was under assault by the clone makers. They had been out-maneuvered by Microsoft and the writing was on the wall that Big Blue was in trouble. The CEO who presided during the fall of the hardware empire was John Akers. At the time, IBM had their fingers in every cookie jar. They were involved with instigating the Internet. They made mainframes. They made PCs. They made CPUs. They made printers. They provided services. How could they be in financial trouble? Because their core business, making computers, was becoming a commodity and quickly becoming obsolete. IBM loves to own an industry. But they didn’t own PCs any more. They never owned PCs in the home after the PC Jr flopped. And mainframes were quickly going out of style. John Akers had been a lifer at IBM and by then there were generations of mature culture and its byproduct, bureaucracy, to contend with. Akers simply couldn’t move the company fast enough. The answer was to get rid of John Akers and bring in a visionary. The visionaries in the computing field didn’t want IBM. CEOs like John Sculley at Apple and Bill Gates at Microsoft turned them down. That’s when the name of someone from a big customer came up: Louis Gerstner. He had been the president of American Express and the CEO of RJR Nabisco. He had connections to IBM, with his brother having run the PC division for a time. And he was the first person brought in from the outside to run the by-then more than 80 year old company. And the first of a wave of CEOs paid big money. Commonplace today. Starting in 1993, he moved from an IBM incapable of making decisions because of competing visions to one where execution and simplification were key. He made few changes in the beginning. At the time, competitor CDC was being split up into smaller companies and lines of business were being spun down as they faced huge financial losses. John Akers had let each division run itself - Gerstner saw the need for services given all this off-the-shelf tech being deployed in the 90s. The industry was standardizing, making it ripe for re-usable code that could run on standardized hardware but then be sold with a lot of services to customize it for each customer. In other words, it was time for IBM to become an integrator. One that could deliver a full stack of solutions. This meant keeping the company as one powerhouse rather than breaking it up. You see, buy IBM kit, have IBM supply a service, and then IBM could use that as a wedge to sell more and more automation services into the companies. Each aspect on its own wasn’t hugely profitable, but combined - much larger deal sizes. And given IBM’s piece of the internet, it was time for e-commerce. Let that Gates kid have the operating system market and the clone makers have the personal computing market in their races to the bottom. He’d take the enterprise - where IBM was known and trusted and in many sectors loved.
And he’d take what he called e-business, which we’d call eCommerce today. He brought in Irving Wladawsky-Berger and they spent six years pivoting one of the biggest companies in the world into this new strategy. The strategy also meant streamlining various operations. Each division previously had the autonomy to pick their own agency. He centralized with Ogilvy & Mather. One brand. One message. Unlike Akers he didn’t have much loyalty to the old ways. Yes, OS/2 was made at IBM, but by the time Windows 3.11 shipped, IBM had been outmaneuvered, and so one of his first moves was to stop development of OS/2 in 1994. They didn’t own the operating system market so they let it go. Cutting divisions meant there were a lot of people who didn’t fit in with the new IBM any longer. IBM had always hired people for life. Not any more. Over the course of his tenure over 100,000 people were laid off. According to Gerstner they’d grown lazy because performance didn’t really matter. And the high performers complained about the complacency. So those first two years came as a shock. But he managed to stop hemorrhaging cash and start the company back on a growth track. Let’s put this in perspective. His 9 years saw the company’s market cap nearly quintuple. This in a company that was founded in 1911, so by then over 80 years old. Microsoft, Dell, and so many others grew as well. But a rising tide lifts all boats. Gerstner brought IBM back. But withdrew from categories that would take over the internet. He was paid hundreds of millions of dollars for his work. There were innovative new products in his tenure. The Simon Personal Communicator in 1994. This was one of the earliest mobile devices. Batteries and cellular technology weren’t where they needed to be just yet but it certainly represented a harbinger of things to come. IBM introduced the PC Jr all the way back in 1983 and killed it off within two years. But they’d been selling into retail the whole time. So he killed that off and by 2005 IBM pulled out of PCs entirely, selling the division off to Lenovo. A point I don’t think I’ve ever seen made is that Akers inherited a company embroiled in an anti-trust case. The Justice Department filed the case in 1969 and it ran until 1982, eating up thousands of hours of testimony from nearly a thousand witnesses. Akers took over in 1985 and by then IBM was putting clauses in every contract that allowed companies like Microsoft, Sierra Online, and everyone else involved with PCs to sell their software, services, and hardware to other vendors. This opened the door for the clone makers to take the market away after IBM had effectively built the ecosystem and standardized the hardware and form factors that would be used for decades. Unlike Akers, Gerstner inherited an IBM in turmoil - and yet with some of the brightest minds in the world. They had their fingers in everything from the emerging public internet to mobile devices to mainframes to personal computers. He gave management bonuses when they did well and wasn’t afraid to cut divisions, which in his book he says only an outsider could do. This formalized into three “personal business commitments” that contributed to IBM strategies. He represented a shift not only at IBM but across the industry. The computer business didn’t require PhD CEOs as the previous generations had. Companies could manage the market and change cultures. Companies could focus on doing less and sell assets (like lines of business) off to raise cash and sharpen focus.
Companies didn’t have to break up, as CDC had done - but instead could re-orient around a full stack of solutions for a unified enterprise. An enterprise that has been good to IBM and others who understand what they need ever since. The IBM turnaround out of yet another lost decade showed us options for large, monolithic organizations that maybe previously thought different divisions had to run with more independence. Some should - not all. Most importantly though, the turnaround showed us that a culture can change. It’s one of the hardest things to do. Part of that was getting rid of the dress code and anti-alcohol policy. Part of that was performance-based comp. Part of that was to show leaders that consensus was slow and decisions needed to be made. Leaders couldn’t be perfect but a fast decision was better than one that held up business. As with the turnaround after Apple’s lost decade, the turnaround was largely attributable to one powerful personality. Gerstner often shied away from the media. Yet he wrote a book about his experiences called Who Says Elephants Can’t Dance? Following his time at IBM he became the chairman of the private equity firm The Carlyle Group, where he helped grow them into a powerhouse in leveraged buyouts, bringing in Hertz, Kinder Morgan, Freescale Semiconductor, Nielsen Corporation, and so many others. One of the only personal tidbits you get about him in his book is that he really hates to lose. We’re all lucky he turned the company around; since he got there, IBM has filed more patents than any other company for 28 consecutive years. These help push the collective consciousness forward, from 2,300 AI patents to 3,000 cloud patents to 1,400 security patents to laser eye surgery to quantum computing and beyond. 150,000 patents in the storied history of the company. That’s a lot of work to bring computing into companies and increase productivity at scale. Not at the hardware level, with the constant downward pricing pressures - but at the software + services layer. The enduring legacy of the changes Gerstner made at IBM.
9/6/2021 - 13 minutes, 39 seconds

Spam Spam Spam!

Today's episode on spam is read by the illustrious Joel Rennich. Spam is irrelevant, inappropriate, or unsolicited messaging, usually sent to a large number of recipients through electronic means. And while we probably think of spam as something new today, it’s worth noting that the first documented piece of spam was sent in 1864 - through the telegraph. With the advent of new technologies like the fax machine and telephone, unsolicited messages and calls were quick to show up. Ray Tomlinson is widely accepted as the inventor of email, developing the first mail application in 1971 for the ARPANET. It took longer than one might expect for it to get abused, likely because the users were mostly researchers and people from the military industrial research community. Then in 1978, Gary Thuerk at Digital Equipment Corporation decided to send out a message about the new VAX computer being released by Digital. At the time, there were 2,600 email accounts on ARPANET and his message found its way to 400 of them. That’s a little over 15% of the Internet at the time. Can you imagine sending a message to 15% of the Internet today? That would be nearly 600 million people. But it worked. Supposedly he closed $12 million in deals despite rampant complaints back to the Defense Department. But it was too late; the damage was done. He proved that unsolicited junk mail would be a way to sell products. Others caught on. Like Dave Rhodes, who popularized MAKE MONEY FAST chains in 1988. Maybe not a real name but pyramid schemes probably go back to the pyramids so we might as well have them on the Internets. By 1993 unsolicited email was enough of an issue that we started calling it spam. That came from the Monty Python skit where Vikings in a cafe sang about Spam, which was in everything on the menu. That Spam was the canned meat made of pork, sugar, water, salt, potato starch, and sodium nitrate originally developed by Jay Hormel in 1937, which - due to how cheap and easy it was - found itself part of a cultural shift in America. Spam came out of Austin, Minnesota. Jay’s dad George incorporated Hormel in 1901 to process hogs and beef and developed canned lunchmeat that evolved into what we think of as Spam today. It was spiced ham, thus Spam. During World War II, Spam found its way to GIs fighting the war, and to England and the countries the war was being fought in. It was durable and could sit on a shelf for months. From there it ended up in school lunches, and after fishing sanctions on Japanese-Americans in Hawaii restricted the foods they could haul in, Spam found its way there and some countries grew to rely on it due to displaced residents following the war. And yet, it remains a point of scorn in some cases. As the Monty Python sketch mentions, Spam was ubiquitous, unavoidable, and repetitive. Same with spam through our email. We rely on email. We need it. Email was the first real killer app for the Internet. We communicate through it constantly. Even when we hear the chime that our email client got a new message, expect we’re about to land that big deal, and get the electronic equivalent of gelatinous meat instead. It’s just unavoidable. That’s why a repetitive poster on a list had his messages called spam, and the use just grew from there. Spam isn’t exclusive to email. Laurence Canter and Martha Siegel sent the first commercial Usenet spam, known as the “Green Card” spam, just after the NSF allowed commercial activities on the Internet.
It was a simple Perl script to sell people on the idea of paying a fee to have them enroll people into the green card lottery. They made over $100,000 and even went so far as to publish a book on guerrilla marketing on the Internet. Canter got disbarred for illegal advertising in 1997. Over the years new ways have come about to try and combat spam. RBLs, or Realtime Blackhole Lists - DNS-based blacklists used to mark hosts as spam senders so mail from them over port 25 could be blocked - emerged in 1996 from the Mail Abuse Prevention System, or MAPS. Developed by Dave Rand and Paul Vixie, the list of IP addresses helped for a bit. That is, until spammers realized they could just send from a different IP. Vixie also mentioned the idea of matching a sender claim to the mail server a message came from as a means of limiting spam, a concept that would later come up again and evolve into the Sender Policy Framework, or SPF for short. That's around the same time Steve Linford founded Spamhaus to block anyone that knowingly spams or provides services to spammers. If you have a cable modem and try to set up an email server on it you've probably had to first get them to unblock your address from their Don't Route list. The next year Mark Jeftovic created a tool called filter.plx to help filter out spam, and that project got picked up by Justin Mason, who uploaded his new filter to SourceForge in 2001. A filter he called SpamAssassin. Because ninjas are cooler than pirates. Paul Graham, the co-creator of Y Combinator (and author of a LISP-like programming language), wrote a paper he called "A Plan for Spam" in 2002. He proposed using a Bayesian filter to combat spam, much as antivirus software vendors used automated scanning to combat viruses. That would be embraced and is one of the more common methods still used to block spam. In the paper he went into detail on how the scoring of various words would work and the probability, compared against the rest of his email, that a message would get flagged as spam. That Bayesian filter would be added to SpamAssassin and others the next year. Dana Valerie Reese came up with the idea of matching sender claims independently, and she and Vixie both sparked a conversation and the creation of the Anti-Spam Research Group in the IETF. The European Parliament released the Directive on Privacy and Electronic Communications, criminalizing spam in the EU. Australia and Canada followed suit. 2003 also saw the first laws in the US regarding spam. The CAN-SPAM Act of 2003 was signed by President George W. Bush and allowed the FTC to regulate unsolicited commercial emails. Here we got requirements like honoring opt-outs from commercial messages, and it didn't take long before the new law was used to prosecute spammers, with Nicholas Tombros getting the dubious honor of being the first spammer convicted. What was his spam selling? Porn. He got a $10,000 fine and six months of house arrest. Fighting spam with laws turned international. Christopher Pierson was charged with malicious communication after he sent hoax emails. And even though spammers were getting fined and put in jail all the time, the amount of spam continued to increase. We had pattern filters, Bayesian filters, and even the threat of legal action. But the IETF Anti-Spam Research Group specifications were merged by Meng Weng Wong, and by 2006 W. Schlitt had joined the paper to form a new Internet standard called the Sender Policy Framework, which lives on in RFC 7208.
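The statistical approach Graham described is easy to sketch. Here's a minimal, illustrative Python version of that style of Bayesian scoring - the token counts, corpus sizes, and thresholds below are made up for illustration, and real filters like SpamAssassin layer this kind of score on top of hand-written rules and network tests.

import math
import re

# Toy training data: how often each token appeared in spam vs ham messages.
# These counts and corpus sizes are invented for the example.
SPAM_COUNTS = {"free": 120, "money": 95, "meeting": 3, "prize": 80, "report": 5}
HAM_COUNTS = {"free": 10, "money": 12, "meeting": 150, "prize": 1, "report": 90}
N_SPAM, N_HAM = 500, 500

def token_spam_probability(token: str) -> float:
    # Per-token P(spam | token), in the spirit of "A Plan for Spam".
    s = SPAM_COUNTS.get(token, 0) / N_SPAM
    h = 2 * HAM_COUNTS.get(token, 0) / N_HAM  # weight ham double to avoid false positives
    if s + h == 0:
        return 0.4  # unknown tokens lean slightly innocent
    return max(0.01, min(0.99, s / (s + h)))

def score_message(text: str, top_n: int = 15) -> float:
    # Combine the most "interesting" tokens (furthest from 0.5) with naive Bayes.
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    probs = sorted((token_spam_probability(t) for t in tokens),
                   key=lambda p: abs(p - 0.5), reverse=True)[:top_n]
    if not probs:
        return 0.5
    log_spam = sum(math.log(p) for p in probs)     # work in log space
    log_ham = sum(math.log(1 - p) for p in probs)  # to avoid underflow
    return 1 / (1 + math.exp(log_ham - log_spam))

print(score_message("Claim your FREE money prize now"))          # close to 1 - likely spam
print(score_message("Agenda for the quarterly report meeting"))  # close to 0 - likely ham

Tokens that show up mostly in junk mail push the combined probability toward 1, tokens from legitimate mail pull it toward 0, and anything over a chosen threshold gets flagged.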
There are a lot of moving parts but at the heart of it, Simple Mail Transfer Protocol, or SMTP, allows sending mail from any connection over port 25 (or others if it's SSL-enabled) and lets a message pass with very little information - although the sender, or sending claim, is a requirement. A common troubleshooting technique used to be simply telnetting into port 25 and sending a message from an address to a mailbox on a mail server. Theoretically one could take the MX record - the DNS record that lists the mail server that mail bound for a domain should be delivered to - and force all outgoing mail to match that. However, due to so much spam, some companies have dedicated outbound mail servers that are different from their MX record and block outgoing mail like people might send if they're using personal mail at work. In order not to disrupt a lot of valid use cases for mail, SPF had administrators create TXT records in DNS that listed which servers could send mail on their behalf. Now a filter could check the SMTP server a given message came from against that list and know whether it matched a server that was allowed to send mail for the domain. And so a large chunk of spam was blocked. Yet people still get spam for a variety of reasons. One is that new servers go up all the time just to send junk mail. Another is that email accounts get compromised and used to send mail. Another is that mail servers get compromised. We have filters and even Bayesian and more advanced forms of machine learning. Heck, sometimes we even sign up for a list by giving our email out when buying something from a reputable site or retail vendor. Spam accounts for over 90% of the total email traffic on the Internet. This is despite blacklists, SPF, and filters. And despite the laws and threats, spam continues. And it pays well. We mentioned Canter and Siegel. Shane Atkinson was sending 100 million emails per day in 2003. That doesn't happen for free. Nathan Blecharczyk, a co-founder of Airbnb, paid his way through Harvard on the back of spam. Some spam sells legitimate products in illegitimate ways, as we saw with early IoT standard X10. Some is used to spread hate and disinformation, going back to Serdar Argic, known for denying the Armenian genocide through newsgroups in 1994. Long before Infowars existed. Peter Francis-Macrae sent spam to solicit buying domains he didn't own. He was convicted after resorting to blackmail and threats. Jody Michael Smith sold replica watches and served almost a year in prison after he got caught. Some spam is sent to get hosts loaded with malware so they could be controlled, as happened with Peter Levashov, the Russian czar of the Kelihos botnet. Oleg Nikolaenko was arrested by the FBI in 2010 for spamming to get hosts into his Mega-D botnet. The Russians are good at this; they even registered the Russian Business Network as a website in 2006 to promote running an ISP for phishing, spam, and the Storm botnet. Maybe Flyman is connected to the Russian oligarchs and so continues to be allowed to operate under the radar. They remain one of the more prolific spammers. Much is sent by a small number of spammers. Khan C. Smith sent a quarter of the spam in the world until he got caught in 2001 and fined $25 million. Again, spam isn't limited to just email. It showed up on Usenet in the early days. And AOL sued Chris "Rizler" Smith for over $5M for his spam on their network. Adam Guerbuez was fined over $800 million for spamming Facebook.
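To make the SPF mechanics a little more concrete, here's a minimal Python sketch of the kind of check a receiving filter performs: parse a domain's published SPF TXT record and see whether the connecting IP is covered by one of its ip4 mechanisms. The record string and addresses are made-up examples, and real validators following RFC 7208 also handle include, a, mx, and redirect mechanisms, which require actual DNS lookups.

import ipaddress

# A hypothetical TXT record an administrator might publish for example.com.
SPF_RECORD = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.10 -all"

def check_spf_ip4(record: str, connecting_ip: str) -> str:
    # Return "pass" if the IP matches an ip4 mechanism, otherwise the "all" verdict.
    ip = ipaddress.ip_address(connecting_ip)
    verdict_for_all = "neutral"
    for mechanism in record.split()[1:]:  # skip the v=spf1 version tag
        if mechanism.startswith("ip4:"):
            if ip in ipaddress.ip_network(mechanism[4:], strict=False):
                return "pass"
        elif mechanism.endswith("all"):
            # -all fails anything not listed, ~all is a softfail, ?all is neutral
            verdict_for_all = {"-": "fail", "~": "softfail", "?": "neutral"}.get(mechanism[0], "pass")
    return verdict_for_all

print(check_spf_ip4(SPF_RECORD, "192.0.2.55"))   # pass - inside the allowed /24
print(check_spf_ip4(SPF_RECORD, "203.0.113.9"))  # fail - not a listed sender

If the connecting server isn't in the list, the -all at the end tells the receiver to treat the message as a failure - which is how that large chunk of spam gets blocked before anyone ever sees it.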
And LinkedIn allows people to send me unsolicited messages if they pay extra, probably why Microsoft paid $26 billion for the social network. Spam has been with us since the telegraph; it isn't going anywhere. But we can't allow it to run unchecked. The legitimate organizations that use unsolicited messages to drive business help obfuscate the illegitimate acts where people are looking to steal identities or worse. Gary Thuerk opened a Pandora's box that would have been opened even if he hadn't done so. The rise of the commercial Internet, and the co-opting of the emerging cyberspace as a place where privacy - and so anonymity - trumps verification, hit a global audience of people who are not equal. Inequality breeds crime. And so we continually have to rethink the answers to the question of sovereignty versus the common good. Think about that next time an IRS agent with a thick foreign accent calls asking for your social security number - and remember (if you're old enough) that we used to show our social security cards to grocery store clerks when we wrote checks. Can you imagine?!?!
8/26/2021 - 11 minutes, 42 seconds
Episode Artwork

Do You Yahoo!?

The simple story of Yahoo! is that they were an Internet search company that came out of Stanford during the early days of the web. They weren't the first nor the last. But they represent a defining moment in the rise of the web as we know it today, when there was enough content out there that there needed to be an easily searchable catalog of content. And that's what Stanford PhD students David Filo and Jerry Yang built. As with many of those early companies it began as a side project called "Jerry and David's Guide to the World Wide Web." And grew into a company that at one time rivaled any in the world. At the time there were other search engines, and they all started adding portal aspects to their sites, growing fast until the dot-com bubble burst. They slowly faded until being merged with another 90s giant, AOL, in 2017 to form Oath, which got renamed to Verizon Media in 2019 and then effectively sold to investment management firm Apollo Global Management in 2021. Those early years were wild. Yang moved to San Jose in the 70s from Taiwan, and earned a bachelor's then a master's at Stanford - where he met David Filo in 1989. Filo is a Wisconsin kid who moved to Stanford and got his master's in 1990. The two went to Japan in 1992 on an exchange program and came home to work on their PhDs. That's when they started surfing the web. Within two years they started their Internet directory, in 1994. As it grew they hosted the database on Yang's student computer called akebono and the search engine on konishiki, which was Filo's. They renamed it to Yahoo, short for Yet Another Hierarchical Officious Oracle - after all, they maybe considered themselves Yahoos at the time. And so Yahoo began life as akebono.stanford.edu/~yahoo. Word spread fast and they'd already had a million hits by the end of 1994. It was time to move out of Stanford. Marc Andreessen offered to let them move into Netscape. They bought a domain in 1995 and incorporated the company, getting funding from Sequoia Capital and raising $3,000,000. They tinkered with selling ads on the site to fund buying more servers but there was a lot of businessing. They decided that they would bring in Tim Koogle (which ironically rhymes with Google) to be CEO, who brought in Jeff Mallett from Novell's consumer division to be the COO. They were the suits and got revenues up to a million dollars. The idea of the college kids striking gold fueled the rise of other companies and Yang and Filo became poster children. Applications from others all over the world looking to make their mark started streaming in to Stanford - a trend that continues today. Yet another generation was about to flow into Silicon Valley. First the chip makers, then the PC hobbyists turned businesses, and now the web revolution. But at the core of the business were Koogle and Mallett, bringing in advertisers and investors. And the next year, needing more and more servers and employees to fuel further expansion, they went public, selling over two and a half million shares at $13 to raise nearly $34 million. That's just one year after a gangbuster IPO from Netscape. The Internet was here. Revenues shot up to $20 million. A concept we repeatedly look at is the technological determinism that industries go through. At this point it's easy to look in the rear view mirror and see change coming at us. First we document information - like Jerry and David building a directory. Then we move it to a database so we can connect that data. Thus a search engine. Given that Yahoo!
was a search engine, they were already on the Internet. But the next step in the deterministic application of modern technology is to replace human effort with increasingly sophisticated automation. You know, like applying basic natural language processing, classification, and polarity scoring algorithms to enrich the human experience. Yahoo! instead hired "surfers" to do these tasks. They curated the web. Yes, they added feeds for news, sports, finance, and created content. Their primary business model was to sell banner ads. And they pioneered the field. Banner ads mean people need to be on the site to see them. So adding weather, maps, shopping, classifieds, personal ads, and even celebrity chats were natural adjacencies given that mental model. Search itself was almost a competitor, sending people off to other parts of the web where Yahoo! wasn't making money off their eyeballs. And they were pushing traffic to over 65 million pages worth of data a day. They weren't the only ones. This was the portal era of search and companies like Lycos, Excite, and InfoSeek were following the same model. They created local directories and people and companies could customize the look and feel. Their first designer, David Shen, takes us through the user experience journey in his book Takeover! The Inside Story of the Yahoo Ad Revolution. They didn't invent pay-per-click advertising but did help to make it common practice and proved that money could be made on this whole new weird Internet thing everyone was talking about. The first ad they sold was for MCI and from there they were practically printing money. Every company wanted in on the action - and sales just kept going up. Bill Clinton gave them a spot in the Internet Village during his 1997 inauguration and they were for a time seemingly synonymous with the Internet. The Internet was growing fast. Cataloging the Internet and creating content for the Internet became a larger and larger manual task. As did selling ads, which was a manual transaction requiring a larger and larger sales force. As with other rising internet properties, people dressed how they wanted, they'd stay up late building code or content and crash at the desk. They ran funny, cheeky ads with that yodel - becoming a brand that people knew and many equated to the Internet. We can thank San Francisco's Black Rocket ad agency for that. They grew fast. The founders made several strategic acquisitions and gobbled up nearly every category of the Internet, each of which has since grown to billions of dollars. They bought Four11 for $95 million in their first, and probably best, acquisition, and used them to create Yahoo! Mail in 1997 and a calendar in 1998. They had over 12 million Yahoo! Mail users by the end of the year, inching their way toward the number of AOL users out there. There were other tools like Yahoo Briefcase, to upload files to the web - now common with cloud storage providers like Dropbox, Box, Google Drive, and even Office 365. And contacts and Messenger - a service that would run until 2018. Think of all the messaging apps that have come with their own spin on the service since. 1998 also saw the acquisition of Viaweb, founded by the team that would later create Y Combinator. It was just shy of a $50M acquisition that brought the Yahoo! Store - which was similar to the Shopify of today. They got a $250 million investment from Softbank, bought Yoyodyne, and launched a service with AT&T's WorldNet to take on AOL's dialup offering. By the end of the year they were closing in on 100 million page views a day.
That's a lot of banners shown to visitors. But Microsoft was out there, with their MSN portal at the height of the browser wars. Yahoo! bought Broadcast.com in 1999, saddling the world with Mark Cuban. They dropped $5.7 billion for 300 employees and little more than an ISDN line. Here, they paid over a 100x multiple of annual revenues and failed to transition sellers into their culture. Sales cures all. In his book We Were Yahoo!, Jeremy Ring lays much of the blame for the failure to capitalize on the acquisition on not understanding the different selling motion. I don't remember him outright saying it was hubris, but he certainly indicates that it should have worked out and that broadcast.com could have been what YouTube would become. Another market lost, along with a failed attempt at Yahoo TV. And yet many of these were trends started by AOL. They also bought GeoCities in 99 for $3.7 billion. Others have tried to allow for fast and easy site development - the no-code WYSIWYG web. GeoCities lasted until 2009 - a year after Google launched Google Sites. And we have Wix, Squarespace, WordPress, and so many others offering similar services today. As they grew, some of the other 130+ search engines at the time folded. The new products continued. The Yahoo Notebook came before Evernote. Imagine your notes accessible from any device you could log into. The more banners shown, the more clicks. Advertisers could experiment in ways they'd never been able to before. They also inked distribution deals, pushing traffic to other sites that did things they didn't. The growth of the Internet had been fast, with nearly 100 million people armed with Internet access - and that number was thought to triple in just the next three years. And even still, many felt a bubble was forming. Some, like Google, had conserved cash - others, like Yahoo!, had spent big on acquisitions they couldn't monetize into truly adjacent cash flow generating opportunities. And meanwhile they were alienating web properties by leaning into every space that kept eyeballs on the site. By 2000 their stock traded at $118.75 and they were the most valuable internet company at $125 billion. Then, as customers folded when the dot-com bubble burst, the stock fell to $8.11 the next year. One concept we talk about in this podcast is a lost decade. Arguably they'd entered into theirs around the time the dot-com bubble burst. They decided to lean into being a media company even further. Again, showing banners to eyeballs was the central product they sold. They brought in Terry Semel in 2001, using over $100 million in stock options to entice him. And the culture problems came fast. Semel flew in a fancy jet, launched television shows on Yahoo!, and alienated programmers, effectively creating an us-vs-them culture and devaluing the work done on the portal and search. Work that could have made them competitive with Google AdWords, which, while only a year old, was already starting to eat away at profits. But media. They bought a company called LaunchCast in 2001, charging a monthly fee to listen to music. Yahoo Music came before Spotify, Pandora, and Apple Music, and even though it was the same year the iPod was released, they let us listen to up to 1,000 songs for free or pony up a few bucks a month to get rid of ads and allow for skips. A model that has been copied by many over the years. By then they knew that paid search was becoming a money-maker over at Google. Overture had actually been first to that market, and so Yahoo! bought them for $1.6 billion in 2003.
But again, they didn't integrate the team, and in a classic "not invented here" moment started Project Panama, where they'd spend three years building their own search advertising platform. By the time that shipped, the search war was over and executives and great programmers were flowing into other companies all over the world. And by then they were all over the world. 2005 saw them invest $1 billion in a little company called Alibaba. An investment that would grow to become the crown jewel in Yahoo's empire and, as they dwindled away, a key aspect of what led to their final demise. They bought Flickr in 2005 for $25M. User generated content was a thing. And Flickr was almost what Instagram is today. Instead we'd have to wait until 2010 for Instagram, because Flickr ended up yet another of the failed acquisitions. And here's something wild to think about - Stewart Butterfield and Cal Henderson started another company after they sold Flickr. Slack sold to Salesforce for over $27 billion. Not only is that a great team who could have turned Flickr into something truly special, but if they'd been retained and allowed to flourish at Yahoo! they could have continued building cooler stuff. Yikes. Additionally, Flickr was planning a pivot into social networking, right before a time when Facebook would take over that market. In fact, they tried to buy Facebook for just over a billion dollars in 2006. But Zuckerberg walked away when the price went down after the stock fell. They almost bought YouTube and considered buying Apple, which is wild to think about today. Missed opportunities. And Semel was the first of many CEOs who lacked vision and the capacity to listen to the technologists - in a technology company. These years saw Comcast bring us weather.com, the rise of ESPN online taking eyeballs away from Yahoo! Sports, and Gmail and other mail services reducing reliance on Yahoo! Mail. Facebook, LinkedIn, and other web properties rose to take ad placements away. Even though Yahoo! Finance is still a great portal, sites like Bloomberg took eyeballs away from them. And then there was the rise of user generated content - a blog for pretty much everything. Jerry Yang came back to run the show in 2007, then Carol Bartz from 2009 to 2011, then Scott Thompson in 2012. None managed to turn things around after so much lost inertia - and make no mistake, inertia is the one thing that can't be bought in this world. Wisconsin's Marissa Mayer joined Yahoo! in 2012. She was Google's 20th employee and had risen through the ranks from writing code to leading teams to product manager to running web products - managing not only the layout of that famous homepage but also helping deliver Google AdWords and then Maps. She had the pedigree and managerial experience - and had been involved in M&A. There was an immediate buzz that Yahoo! was back after years of steady decline due to incoherent strategies and mismanaged acquisitions. She pivoted the business more into mobile technology. She brought remote employees back into the office. She implemented a bell curve employee ranking system like Microsoft did during their lost decade. They bought Tumblr in 2013 for $1.1 billion. But key executives continued to leave - Tumblr's value dropped, and the stock continued to drop. Profits were up, revenues were down. Investing in the rapidly growing China market became all the rage. The Alibaba investment was now worth more than Yahoo! itself. Half the shares had been sold back to Alibaba in 2012 to fund Yahoo!
pursuing the Mayer initiatives. And then there was Yahoo Japan, which continued to do well. After years of attempts, activist investors finally got Yahoo! to spin off their holdings. They moved most of the shares to a holding company which would end up getting sold back to Alibaba for tens of billions of dollars. More missed opportunities for Yahoo! And so in the end, they would get merged with AOL - the two companies having been worth nearly half a trillion dollars combined at one point - to become Oath in 2017. Mayer stepped down and the two sold for less than $5 billion. A roller coaster that went up really fast and down really slow. An empire that crumbled and fragmented. Arguably, the end began in 1998 when another couple of grad students at Stanford approached Yahoo! offering to sell Google for $1M. Not only did Filo tell them to try it alone but he also introduced them to Michael Moritz of Sequoia - the same guy who'd initially funded Yahoo!. That wasn't where things really got screwed up though. It was early in a big change in how search would be monetized. But they got a second chance to buy Google in 2002. By then I'd switched to using Google and never looked back. But the CEO at the time, Terry Semel, was willing to put in $3B to buy Google - who decided to hold out for $5B. They are around a $1.8T company today. Again, the core product was selling advertising. And Microsoft tried to buy Yahoo! in 2008 for over $44 billion to power what would become Bing. That was down from the $125 billion height of the market cap during the dot-com bubble. And yet they eventually sold for less than four and a half billion in 2016 and went down in value from there. Growth stocks trade at high multiples but when revenues go down the crash is hard and fast. Yahoo! lost track of the core business - just as the model was changing. And yet they never iterated on it because it just made too much money. They were too big to pivot from banners when Google showed up with a smaller, more bite-sized advertising model that companies could grow into. Along the way, they tried to do too much. They invested over and over in acquisitions that didn't work because they ran off the innovative founders in an increasingly corporate company that was actually trying to pretend not to be. We have to own who we are and who we become. And we have to understand that we don't know anything about the customers of acquired companies, and actually listen - and I mean really listen - when we're being told what those customers want. After all, that's why we paid for the company in the first place. We also have to avoid allowing the market to dictate a perceived growth mentality. Sure, a growth stock needs to hit a certain rate of revenue increase to stay considered a growth stock and thus enjoy those kinds of market capitalization multiples. But that can drive short-term decisions that keep us from investing in areas that don't immediately move the stock. Decisions like trying to keep eyeballs on pages with our own content rather than investing in the user generated content that drove the Web 2.0 revolution. The Internet can be a powerful medium to find information, allow humans to do more with less, and have more meaningful experiences in this life. But just as Yahoo! was engineering ways to keep eyeballs on their pages, the modern Web 2.0 era has engineered ways to keep eyeballs on our devices. And yet what people really want is those meaningful experiences, which happen more when we aren't staring at our screens than when we are.
As I look around at all the alerts on my phone and watch, I can't help but wonder if another wave of technology is coming that disrupts that model. Some apps are engineered to help us lead healthier lifestyles and take a short digital detoxification break. Bush's Memex in "As We May Think" was arguably an apple taken from the tree of knowledge. If we aren't careful, rather than the dream of computers helping humanity do more and freeing our minds to think more deeply, we are simply left with less and less capacity to think and less and less meaning. The Memex came and Yahoo! helped connect us to any content we might want in the world. And yet, like so many others, they stalled in the phase they were at in that deterministic structure that technologies follow. Too slow to augment human labor with machine learning like Google did - but instead too quick to try and do everything for everyone with no real vision other than to be everything to everyone. And so the cuts went on slowly for a long time, leaving employees constantly in fear of losing their jobs. As you listen to this, if I were to leave a single parting thought - it would be that companies should always be willing to cannibalize their own businesses. And yet we have to have a vision that our teams rally behind for how that revenue gets replaced. We can't fracture a company and just sprawl to become everything for everyone, but instead need to be targeted and more precise. And to continue to innovate each product beyond basic machine learning and into deep learning and beyond. And when we see those who lack that focus, don't get annoyed but instead get stoked - that's called a disruptive opportunity. And if there's someone with 1,000 developers in a space, Nicholas Carlson in his book "Marissa Mayer and the Fight To Save Yahoo!" points out that one great developer is worth a thousand average ones. And even the best organizations can easily turn great developers into average ones for a variety of reasons. Again, we can call these opportunities. Yahoo! helped legitimize the Internet. For that we owe them a huge thanks. And we can fast follow their adjacent expansions to find a slew of great and innovative ideas that increased the productivity of humankind. We owe them a huge thanks for that as well. Now what opportunities do we see out there to propel us further yet again?
8/20/2021 - 28 minutes, 15 seconds
Episode Artwork

The Innovations Of Bell Labs

What is the nature of innovation? Is it overhearing a conversation as with Morse and the telegraph? Working with the deaf as with Bell? Divine inspiration? Necessity? Science fiction? Or given that the answer to all of these is yes, is it really more the intersectionality between them and multiple basic and applied sciences, with deeper understandings in each domain? Or is it being given the freedom to research? Or being directed to research? Few have as storied a history of innovation as Bell Labs and few have had anything close to the impact. Bell Labs gave us 9 Nobel Prizes and 5 Turing awards. Their alumni have even more, but those were the ones earned while at Bell. And along the way they gave us 26,000 patents. They researched, automated, and built systems that connected practically every human around the world - moving us all into an era of instant communication. It's a rich history that goes back in time from the 2018 Ashkin Nobel for applied optical tweezers and the 2018 Turing award for deep learning to an almost steampunk era of top hats and the dawn of the electrification of the world. Those late 1800s saw a flurry of applied and basic research. One reason was that governments were starting to fund that research. Alessandro Volta had come along and given us the battery and it was starting to change the world. So Napoleon's nephew, Napoleon III, during the Second French Empire, gave us the Volta Prize in 1852. One of those great researchers to receive the Volta Prize was Alexander Graham Bell. He invented the telephone in 1876 and was awarded the Volta Prize, getting 50,000 francs. He used the money to establish the Volta Laboratory, which would evolve into or be a precursor to a research lab that would be called Bell Labs. He also formed the Bell Patent Association in 1876. They would research sound. Recording, transmission, and analysis - so science. There was a flurry of business happening in preparation to put a phone in every home in the world. We got the Bell System, the Bell Telephone Company, the American Bell Telephone Company, patent disputes with Elisha Gray over the telephone (and so the acquisition of Western Electric), and finally American Telephone and Telegraph, or AT&T. Think of all this as Ma' Bell. Not Pa' Bell mind you - as Graham Bell gave all of his shares except 10 to his new wife when they were married in 1877. And her dad ended up helping build the company and later creating National Geographic, even going international with the International Bell Telephone Company. Bell's assistant Thomas Watson sold his shares off to become a millionaire in the 1800s, embarking on a life as a Shakespearean actor. But Bell wasn't done contributing. He still wanted to research all the things. Hackers gotta' hack. And the company needed him to - keep in mind, they were a cutting edge technology company (then as now). That thirst for research would infuse AT&T - with Bell Labs paying homage to the founder's contribution to the modern day. Over the years they'd be on West Street in New York and expand to have locations around the US. Think about this: it was becoming clear that automation would be able to replace human efforts where electricity was concerned. The next few decades gave us the vacuum tube, flip-flop circuits, and the mass deployment of radio. The world was becoming ever so slightly interconnected. And Bell Labs was researching all of it. From physics to the applied sciences.
By the 1920s, they were doing sound synchronized with motion, shooting that over long distances, and calculating the noise loss. They were researching encryption. Because people wanted their calls to be private. That began with things like one-time pad ciphers but would evolve into speech synthesizers and even SIGSALY, the first encrypted (or scrambled) speech transmission, which led to the invention of the first computer modem. They had engineers like Harry Nyquist, whose name is on dozens of theories, frequencies, even noise. He arrived in 1917 and stayed until he retired in 1954. One of his most important contributions was to move beyond the printing telegraph to paper tape and to help transmit pictures over electricity - and Herbert Ives from there sent color photos, thus the fax was born (although it would be Xerox who commercialized the modern fax machine in the 1960s). Nyquist and others like Ralph Hartley worked on making audio better, able to transmit over longer lines, reducing feedback, or noise. While there, Hartley gave us the Hartley oscillator, developed radio receivers and parametric amplifiers, and then got into servomechanisms before retiring from Bell Labs in 1950. The scientists who'd been in their prime between the two world wars were titans and left behind commercializable products, even if they didn't necessarily always mean to. By the 40s a new generation was there, building on the shoulders of these giants. Nyquist's work was extended by Claude Shannon, who we devoted an entire episode to. He did a lot of mathematical analysis, like writing "A Mathematical Theory of Communication" to birth information theory as a science. They were researching radio because secretly I think they all knew those leased lines would some day become 5G. But also because the tech giants of the era included radio, and many could see a day coming when radio and telephony would converge. They were researching how electrons diffracted, leading to Clinton Davisson sharing the Nobel Prize with George Paget Thomson and beginning the race for solid state storage. Much of the work being done was statistical in nature. And they had William Edwards Deming there, whose work on statistical analysis when he was in Japan following World War II inspired a global quality movement that continues to this day in the form of frameworks like Six Sigma and TQM. Imagine a time when Japanese manufacturing was of such low quality that a phone call might not stay connected for a few minutes or a product might not last. His work in Japan's reconstruction, paired with dedicated founders like Akio Morita, who co-founded Sony, led to one of the greatest productivity increases, without sacrificing quality, of any time in the world. Deming would change the way Ford worked, giving us the "quality culture." Their scientists had built mechanical calculators going back to the 30s (Shannon had built a differential analyzer while still at MIT) - first for calculating the numbers they needed to science better, then for ballistic trajectories, and then, with the Model V in 1946, general computing. But these were slow; electromechanical at best. Mary Torrey was another statistician of the era who, along with Harold Dodge, gave us the theory of acceptance sampling and thus quality control for electronics. And basic electronics research to do flip-flop circuits fast enough to establish a call across a number of different relays was where much of this was leading. We couldn't use mechanical computers for that, and tubes were too slow.
And so in 1947 John Bardeen, Walter Brattain, and William Shockley invented the transistor at Bell Labs, which would be paired with Shannon's work to give us the early era of computers, as we began to weave Boolean logic in ways that allowed us to skip moving parts and move to a purely transistorized world of computing. In fact, they all knew one day soon, everything that monster ENIAC and its bastard stepchild UNIVAC was doing would be done on a single wafer of silicon. But there was more basic research to get there. The types of wires we could use, the Karnaugh map from Maurice Karnaugh, zone melting so we could control doping levels. And by 1959 Mohamed Atalla and Dawon Kahng gave us metal-oxide semiconductor field-effect transistors, or MOSFETs - which were a step on the way to large-scale integration, or LSI chips. Oh, and they'd started selling those computer modems as the Bell 101 after perfecting the tech for the SAGE air-defense system. And the research to get there gave us the basic science for the solar cell, electronic music, and lasers - just in the 1950s. The 1960s saw further work on microphones and communication satellites like Telstar, which saw Bell Labs outsource launching satellites to NASA. Those transistors were coming in handy, as were the solar panels. The 14 watts produced certainly couldn't have moved a mechanical computer wheel. Blaise Pascal would be proud of the research his country's funds inspired, and Volta would have been perfectly happy to have his name still on the lab, I'm sure. Again, shoulders and giants. Telstar relayed its first television signal in 1962. The era of satellites was born later that year when Cronkite televised coverage of Kennedy manipulating world markets on this new medium for the first time and IBM 1401 computers encrypted and decrypted messages, ushering in an era of encrypted satellite communications. Sputnik may have beaten the US into orbit, but the Telstar program has been an enduring system, through to the Telstar 19V launched in 2018 - now outsourced to a Falcon 9 rocket from SpaceX. It might seem like Bell Labs had done enough for the world. But they still had a lot of the basic wireless research to bring us into the cellular age. In fact, they'd plotted out what the cellular age would look like all the way back in 1947! The increasing use of computers to do all the acoustics and physics meant they were working closely with research universities during the rise of computing. They were involved in a failed experiment to create an operating system in the late 60s. Multics influenced so much but wasn't what we might consider a commercial success. It was the result of yet another of DARPA's J.C.R. Licklider's wild ideas in the form of Project MAC, which had Marvin Minsky and John McCarthy. Big names in the scientific community collided in cooperation with GE and Bell Labs, and Multics would end up inspiring many a feature of a modern operating system. The crew at Bell Labs knew they could do better and so set out to take the best of Multics and implement a lighter, easier operating system. So they got to work on the Uniplexed Information and Computing Service, or Unics, which was a pun on Multics. Ken Thompson, Dennis Ritchie, Doug McIlroy, Joe Ossanna, Brian Kernighan, and many others wrote Unix originally in assembly and then rewrote it in C once Dennis Ritchie created that language to replace B.
Along the way, Alfred Aho, Peter Weinberger, and Kernighan gave us AWK, and with all this code they needed a way to keep the source under control, so Marc Rochkind gave us SCCS, or the Source Code Control System, first written for an IBM System/370 and then ported to C - which would be how most environments maintained source code until CVS came along in 1986. And Robert Fourer, David Gay, and Brian Kernighan wrote A Mathematical Programming Language, or AMPL, while there. Unix began as a bit of a shadow project but would eventually go to market as Research Unix when Don Gillies left Bell to go to the University of Illinois at Urbana-Champaign. From there it spread and, after it fragmented into System V, BSD, and other flavors, led to the rise of IBM's AIX, HP-UX, SunOS/Solaris, and many other variants - including those that have evolved into macOS through Darwin, and Android through Linux. But Unix wasn't all they worked on - it was a tool to enable other projects. They gave us the charge-coupled device, which resulted in yet another Nobel Prize. That is an image sensor built on MOS technology. While fiber optics goes back to the 1800s, they gave us research into attenuation over fiber and thus could stretch cables to only need repeaters every few dozen miles - again reducing the cost to run the ever-growing phone company. All of this electronics allowed them to finally start moving from electromechanical and human-operated relays to transistor-to-transistor logic, and less mechanical meant less energy, less labor to repair, and faster service. Decades of innovation gave way to decades of profit - in part because of automation. The 5ESS was a switching system that went online in 1982, and some of what it did, its descendants still do today. Long distance billing, switching modules, digital line trunk units, line cards - the grid could run with less infrastructure because the computer managed distributed switching. The world was ready for packet switching. 5ESS was 100 million lines of code, mostly written in C. All that source was managed with SCCS. Bell continued with innovations. They produced that modem up into the 70s but allowed Hayes, Rockwell, and others to take it to a larger market - coming back in from time to time to help improve things, like when Bell Labs, branded as Lucent after the breakup of AT&T, helped bring the 56k modem to market. The presidents of Bell Labs were as integral to the success and innovation as the researchers. Frank Baldwin Jewett from 1925 to 1940, Oliver Buckley from 40 to 51, the great Mervin Kelly from 51 to 59, James Fisk from 59 to 73, William Oliver Baker from 73 to 79, and a few others since gave people like Bishnu Atal the space to develop speech processing algorithms and predictive coding and thus codecs. And they let Bjarne Stroustrup create C++, employed Eric Schmidt, who would go on to become the CEO of Google - and the list goes on. Nearly every aspect of technology today is touched by the work they did. All of this research. Jon Gertner wrote a book called The Idea Factory: Bell Labs and the Great Age of American Innovation. He chronicles the journey of multiple generations of adventurers from Germany, Ohio, Iowa, Japan, and all over the world to the Bell campuses. The growth and contraction of the basic and applied research and the amazing minds that walked the halls. It's a great book and a short episode like this couldn't touch the aspects he covers. He doesn't end the book as hopeful as I remain about the future of technology, though.
But since he wrote the book, plenty has happened. After the hangover from the breakup of Ma Bell they're now back to being called Nokia Bell Labs - following a $16.6 billion acquisition by Nokia. I sometimes wonder if the world has the stomach for the same level of basic research. And then Alfred Aho and Jeffrey Ullman from Bell end up sharing the Turing Award for their work on compilers. And other researchers hit terabit-a-second speeds. A storied history that will be a challenge for Marcus Weldon's successor. He arrived as a post-doc in 1995 and rose to lead the labs and become the CTO of Nokia - he said the next regeneration of a Doctor Who doctor would come in after him. We hope they are as good stewards as those who came before them. The world is looking around after these decades of getting used to the technology they helped give us. We're used to constant change. We're accustomed to speed increases from 110 bits a second to now terabits. The nature of innovation isn't likely to be something their scientists can uncover. My guess is Prometheus is guarding that secret - if only to keep others from suffering the same fate after giving us the fire that sparked our imaginations. For more on that, maybe check out Hesiod's Theogony. In the meantime, think about the places where various sciences and disciplines intersect, and think about the wellspring of each and the vast supporting casts that gave us our modern life. It's pretty phenomenal when ya' think about it.
8/15/2021 - 22 minutes, 18 seconds
Episode Artwork

VisiCalc, Excel, and The Rise Of The Spreadsheet

Once upon a time, people were computers. It's probably hard to imagine teams of people spending their entire day toiling in large grids of paper, writing numbers and calculating numbers by hand or with mechanical calculators, and then writing more numbers and then repeating that. But that's the way it was before 1979. The term spreadsheet comes from back when a spread, like a magazine spread, of ledger cells was used for bookkeeping. There's a great scene in the Netflix show Halston where a new guy is brought in to run the company and he's flying through an electro-mechanical calculator. Halston just shuts the door. Ugh. Imagine doing by hand what we do in a spreadsheet in minutes today. Even really large companies jump over into a spreadsheet to do financial projections today - and with trendlines, tweaking this small variable or that, and even having different algorithms to project the future contents of a cell - the computerized spreadsheet is one of the most valuable business tools ever built. It's that instant change we see when we alter one set of numbers and can see the impact down the line. Even with the advent of mainframe computers, accounting and finance teams had armies of people who calculated spreadsheets by hand, building complicated financial projections. If the formulas changed then it could take days or weeks to re-calculate and update every cell in a workbook. People didn't experiment with formulas. Computers up to this point had been able to calculate changes and, provided all the formulas were accurate, could output results onto punch cards or printers. But the cost had been in the millions before Digital Equipment and the Data General Nova came along and it dropped into the tens or hundreds of thousands of dollars. The first computerized spreadsheets weren't instant. Richard Mattessich developed an electronic, batch spreadsheet in 1961. He'd go on to write a book called "Simulation of the Firm Through a Budget Computer Program." His work was more theoretical in nature, but IBM developed the Business Computer Language, or BCL, the next year. What IBM did got copied by their seven dwarfs. Former GE employees Leroy Ellison, Harry Cantrell, and Russell Edwards developed AutoPlan/AutoTab, another scripting language for spreadsheets, working along delimited files of numbers. And in 1970 we got LANPAR, which opened up more than reading files in from sequential, delimited sources. But then everything began to change. Dan Bricklin graduated from MIT and went to work for Digital Equipment Corporation on an early word processor called WPS-8. We were now in the age of interactive computing on minicomputers. He then went to work for FasFax in 1976 for a year, getting exposure to calculating numbers. And then he went off to Harvard in 1977 to get his MBA. But while he was at Harvard he started working on one of the timesharing programs to help do spreadsheet analysis and wrote his own tool that could do five columns and 20 rows. Then he met Bob Frankston and they added Dan Fylstra, who thought it should be able to run on an Apple - and so they started Software Arts Corporation. Frankston got the programming bug while sitting in on a class during junior high. He then got his undergrad and master's at MIT, where he spent 9 years in school and working on a number of projects with CSAIL, including Multics. He'd been consulting and working at various companies for a while in the Boston area, which at the time was probably the major hub.
Frankston and Bricklin would build a visible calculator using 16k of space that could fit on a floppy. They used a time sharing system and, because they were paying for time, they worked at nights when time was cheaper, to save money. They founded a company called Software Arts and named their visible calculator VisiCalc. Along comes the Apple II. And computers were affordable. They ported the software to the platform and it was an instant success. It grew fast. Competitors sprang up. SuperCalc in 1980, bundled with the Osborne. The IBM PC came in 1981 and the spreadsheet appeared in Fortune for the first time. Then the cover of Inc Magazine in 1982. Publicity is great for sales and inspiring competitors. Lotus 1-2-3 was announced in 1982 and even Boeing Computer Services got in the game with Boeing Calc in 1985. They extended the ledger metaphor to add sheets to the spreadsheet, which we think of as tabs today. Quattro Pro from Borland copied that feature and, despite having their offices effectively destroyed during an earthquake just before release, came to market in 1989. Ironically they got the idea after someone falsely claimed they were making a spreadsheet a few years earlier. And so other companies were building visible calculators and adding new features to improve on the spreadsheet concept. Microsoft was one who really didn't make a dent in sales at first. They released an early spreadsheet tool called Multiplan in 1982. But Lotus 1-2-3 was the first killer application for the PC. It was more user friendly and didn't have all the bugs that had come up in VisiCalc as it was ported to run on platform after platform. Lotus was started by Mitch Kapor, who brought Jonathan Sachs in to develop the spreadsheet software. Kapor's marketing prowess would effectively obsolete VisiCalc in a number of environments. They made TV commercials, so you know they were big time! And it was written natively in x86 assembly, so it was fast. They added the ability to add bar charts, pie charts, and line charts. They added color and printing. One could even spread their sheet across multiple monitors like in a magazine. It was 1 - spreadsheets, 2 - charts and graphs, and 3 - basic database functions. Heck, one could even change the size of cells and use it as a text editor. Oh, and macros would become a standard in spreadsheets after Lotus. And because VisiCalc had been around so long, Lotus of course was immediately capable of reading a VisiCalc file when released in 1983. As could Microsoft Excel, when it came along in 1985. And even Boeing Calc could read Lotus 1-2-3 files. After all, the concept went back to those mainframe delimited files and to this day we can import and export to tab or comma delimited files. VisiCalc had sold about a million copies but production would cease the same year Excel was released, although the final release had come in 1983. Lotus had eaten their shorts in the market, and Borland had watched. Microsoft was about to eat both of theirs. Why? Visi was about to build a windowing system called Visi On. And Steve Jobs needed a different vendor to turn to. He looked to Lotus, who built a tool called Jazz that was too basic. But Microsoft, which would go public in 1986 and raise plenty of money, had completed Excel for the Mac in 1985. VisiCalc faded away. And so Excel began on the Mac, and that first version was the first graphical spreadsheet. The other developers didn't think that a GUI was gonna' be much of a thing.
Maybe graphical interfaces were a novelty! Version two was released for the PC in 1987 along with Windows 2.0. Sales were slow at first. But then came Windows 3. Add Microsoft Word to form Microsoft Office, and by the time Windows 95 was released Microsoft had become the de facto market leader in documents and spreadsheets. That's the same year IBM bought Lotus, and they continued to sell the product until 2013, with sales steadily declining. And so without a lot of competition for Microsoft Excel, spreadsheets kinda' sat for a hot minute. Computers became ubiquitous. Microsoft released new versions for Mac and Windows but they went into that infamous lost decade until… competition. And there were always competitors, but this was real competition with something new to add to the mix. Google bought a company called 2Web Technologies in 2006, who made a web-based spreadsheet called XL2WEB. That would become Google Sheets. Google bought DocVerse in 2010 and we could suddenly have multiple people editing a sheet concurrently - and the files were compatible with Excel. By 2015 there were a couple million users of Google Workspace, growing to over 5 million in 2019 and another million in 2020. In the years since, Microsoft released Office 365, starting to move many of their offerings onto the web. That involved 60 million people in 2015 and has since grown to over 250 million. The statistics can be funny here, because it's hard to nail down how many free vs paid Google and Microsoft users there are. Statista lists Google as having a nearly 60% market share but Microsoft is clearly making more from their products. And there are smaller competitors all over the place taking on lots of niche areas. There are a few interesting tidbits here. One is that there's a clean line of evolution in features across the tools. Each new tool worked better, added features, and they all worked with previous file formats to ease the transition into their product. Another is how much we've all matured in our understanding of data structures. I mean, we have rows and columns. And sometimes multiple sheets - kinda' like multiple tables in a database. Our financial modeling and even scientific modeling has grown in acumen by leaps and bounds. Many still used those electro-mechanical calculators in the 70s, when you could buy calculator kits and build your own calculator. The personal computers that flowed out in the next few years gave every business the chance to first track basic inventory and calculate simple information, like how much we might expect in revenue from inventory in stock - and now there are thousands of pre-built formulas supported across most spreadsheet tooling. Despite expensive tools and apps to do specific business functions, the spreadsheet is still one of the most enduring and useful tools we have. Even for programmers, where we're often just getting our data in a format we can dump into other tools! So think about this. What tools out there have common file types where new tools can sit on top of them? Which of those haven't been innovated on in a hot minute? And of course, what is that next bold evolution? Is it moving the spreadsheet from a book to a batch process? Or from a batch process to real-time? Or from real-time to relational with new tabs? Or to add a GUI? Or adding online collaboration? Or, like some big data companies, using machine learning to analyze large data sets and look for patterns automatically?
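To make the rows-columns-and-formulas idea concrete, here's a minimal Python sketch of what every spreadsheet engine does under the hood: pull a comma-delimited export into rows and columns, run a formula across them, and recalculate instantly when one input cell changes. The data and the revenue formula are made-up examples; a real engine tracks a whole dependency graph of cells.

import csv
import io

# A tiny comma-delimited export - the lowest common denominator format
# spreadsheets have read and written since the mainframe days. The numbers are invented.
CSV_DATA = """item,units,price
widgets,120,2.50
gizmos,40,11.00
gadgets,75,4.25
"""

def load_rows(text: str) -> list[dict]:
    # Parse the delimited text into rows and named columns, like importing into a sheet.
    return list(csv.DictReader(io.StringIO(text)))

def revenue_total(rows: list[dict]) -> float:
    # The "formula cell": the sum of units * price across every row.
    return sum(int(r["units"]) * float(r["price"]) for r in rows)

rows = load_rows(CSV_DATA)
print(revenue_total(rows))   # 1058.75 - the projection as imported

rows[1]["units"] = "400"     # change one input cell...
print(revenue_total(rows))   # 5018.75 - ...and the dependent total recalculates instantly

That instant ripple from one cell to the totals is the same trick VisiCalc pulled off on the Apple II, just at toy scale.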
Not only does the spreadsheet help us do the maths - it also helps us map the technological determinism we see repeated through nearly every single tool for any vertical or horizontal market. Those stuck need disruptive competitors if only to push them off the laurels they’ve been resting on. 
8/8/2021 - 17 minutes, 2 seconds
Episode Artwork

Microsoft's Lost Decade

Microsoft went from a fledgling purveyor of a BASIC for the Altair to a force to be reckoned with. The biggest growth hack was when they teamed up with IBM to usher in the rise of the personal computer. They released apps and an operating system, and by licensing DOS to anyone (not just IBM) and then becoming the dominant OS, they allowed clone makers to rise and thus broke the hold IBM had held on the computing industry since the days the big 8 mainframe companies were referred to as "Snow White and the Seven Dwarfs." They were young and bold and grew fast. They were aggressive, taking on industry leaders in different segments, effectively putting CP/M out of business, taking out Lotus, VisiCalc, Novell, Netscape, and many, many other companies. Windows 95 and Microsoft Office helped the personal computer become ubiquitous in homes and offices. The team knew about the technical debt they were accruing in order to grow fast. So they began work on projects that would become Windows NT, and that kernel would evolve into Windows 2000, phasing out the legacy operating systems. They released Windows Server, Microsoft Exchange, Flight Simulators, maps, and seemed for a time to be poised to take over the world. They even seemed to be about to conquer the weird new smart phone world. And then something strange happened. They entered into what we can now call a lost decade. Actually there's nothing strange about it. This happens to nearly every company. Innovation dropped off. Releases of Windows got buggy. The market share of their mobile operating system fell away. Apple and Android basically took the market away entirely. They let Google take the search market, and after they failed to buy Yahoo! they released an uninspired Bing. The MSN subscriptions that once competed with AOL fell away. Google Docs came and was a breath of fresh air. Windows Servers started moving into cloud solutions, where Box or Dropbox were replacing filers and Sharepoint became a difficult story to tell. They copied features from other companies. But were followers - not leaders. And the stock barely moved for a decade, while Apple more than doubled the market cap of Microsoft for a time. What exactly happened here? Some have blamed Steve Ballmer, who replaced Bill Gates, who had led the company since 1975 - and if we want to include Traf-O-Data, since 1972. They grew fast and by Y2K there were memes about how rich Bill Gates was. Then a lot changed over the next decade. Windows XP was released in 2001, the same year the first Xbox was released. They launched the Windows Mobile operating system in 2003, planning to continue the whole "rule the operating system" approach. Vista comes along in 2007. Bill Gates retires in 2008. Later that year, Google launches Chrome - which would eat market share away from Microsoft over time. Windows 7 launches in 2009. Microsoft releases Bing in 2009 and Azure in 2010. The Windows phone comes in 2010 as well, and they would buy Skype for $8.5 billion the next year. The tablet Microsoft Surface came in 2012, two years after the iPad was released. And yet, there were market forces operating to work against what Microsoft was doing. Google had come roaring out of the dot com bubble bursting and proved how money could be made with search. Yahoo! was slow to respond. As Google's aspirations became clear by 2008, Ballmer moved to buy Yahoo! for $20 billion, eventually growing the bid to nearly $45 billion - a move that was thwarted but helped to take the attention of the Yahoo!
team away from the idea of making money. That was the same year Android and Chrome were released. Meanwhile, Apple had released the iPhone in 2007 and was shipping the iPhone 3G in 2008, taking the mobile market by storm. By 2010, slow sales of the Windows Phone were already spelling the end for Ballmer. Microsoft had launched Windows CE in 1996 and held the smaller Handheld PC market for a time. They took over and owned the operating system market for personal computers and productivity software. They were able to seize on a weakened and lumbering IBM to do so. And yet they turned into that lumbering juggernaut of a company. With all those products and all the revenue being generated, Microsoft looked unstoppable by the end of the millennium. Then they got big. Like really big. And organizations can be big and stay lean - but they weren't. Leaders fought leaders, programmers fled, and the fiefdoms caused them to be slow to jump into new opportunities. Bill Gates had been an intense leader - but the Department of Justice filed an anti-trust case against Microsoft, and between that and just managing hyper-growth along the way, they lost their focus on customers and instead focused inward. And so by all accounts, the lost decade began in 2001. Vista was supposed to ship in 2003 but was pushed all the way back to 2007. Bing was a dud, losing billions out of the gate. By 2011 Google released Chrome OS - an operating system that was basically a web browser bootstrapped on Linux, and effectively what Netscape co-founder Marc Andreessen foreshadowed in a Time piece in the early days of the browser wars. Kurt Eichenwald of Vanity Fair wrote an article called "Microsoft's Lost Decade" in 2012, looking at what led to it. He pointed out the arrogance and the products that, even though they were initially developed at Microsoft, would be launched by others first. It was Bill Gates who turned down releasing the e-book reader, the kind of device that would evolve into the tablet. The article explained that moving timelines around pushed developing new products back in the list of priorities. The Windows and Office divisions were making so much money for the company that they had all the power to make the decisions - even when the industry was moving in another direction. The original employees got rich when the company went public and much of the spunk left with them. The focus shifted to pushing up the stock price. Ballmer is infamously not a product guy; he became the president of the company in 1998 and moved to CEO in 2000. But Gates stayed on in product. As we see with companies when their stock price starts to fall, the finger pointing begins. Cost cutting begins. The more talented developers can work anywhere - and so companies like Amazon, Google, and Apple were able to fill their ranks with great developers. When organizations within a larger organization argue, new bureaucracies get formed. Those slow things down by replacing common sense with process. That is good to a point. Like really good to a point. Measure twice, cut once. Maybe even measure three times and cut once. But software doesn't get built by committees, it gets built by humans. The closer engineers are to humans, the more empathy will go into the code. We can almost feel it when we use tools that developers don't fully understand. And further, developers write less code when they're in more meetings. Some meetings are good, but when there are tiers of supervisors and managers and directors and VPs, and Jr and Sr of each, their need to justify their existence leads to more meetings.
The Vanity Fair piece also points out that times changed. Eichenwald called the earlier employees "young hotshots from the 1980s" who by then were later-career professionals, and as personal computers became pervasive, the way people used them changed. A generation of people who grew up with computers now interacted with them differently. People were increasingly always online. Managers who don't understand their users need to release control of products to those who do. They made the Zune five years after the iPod had been released and had lit a fire at Apple. Less than two months later, Apple announced the iPhone and the Zune was dead in the water, never eclipsing 5 percent of the market and finally being discontinued in 2012. Ballmer had predicted that all of these Apple products would fail, and in a quote from a source in the Vanity Fair article, a former manager at Microsoft said "he is hopelessly out of touch with reality or not listening to the tech staff around him". One aspect the article doesn't go into is the sheer number of products Microsoft was producing. They were competing with practically every big name in technology, from Apple to Oracle to Google to Facebook to Amazon to Salesforce. They'd gobbled up so many companies to compete in so many spaces that it was hard to say what Microsoft really was - and yet the Windows and Office divisions made the lion's share of the money. They thought they needed to own every part of the ecosystem, while Apple went a different route and opened a store to entice developers to go direct to market, making more margin with no acquisition cost and building a great ecosystem. The Vanity Fair piece ends with a cue from the Steve Jobs biography, and to sum it up, Jobs said that Microsoft ended up being run by sales people because they moved the revenue needle - just as he had watched happen with Sculley at Apple. Jobs went on to say Microsoft would continue the course as long as Ballmer was at the helm. Back when they couldn't ship Vista they were a 60,000-person company. By 2011, when the Steve Jobs biography was published, they were at 90,000 and had just rebounded from layoffs. By the end of 2012, the iPhone alone had overtaken all of Microsoft in sales. Steve Ballmer left as the CEO of Microsoft in 2014 and Satya Nadella replaced him. Under his leadership, half the company would be moved into research later that year. Nadella wrote a book about his experience turning things around called Hit Refresh. Just as the book Microsoft Rebooted told the story of how Ballmer was going to turn things around in 2004 - except Hit Refresh was actually a pretty good book. And those things seemed to work. The stock price had risen a little in 2014, but since then it's shot up to six times what it was. And all of the pivots to a more cloud-oriented company, and many other moves, seem to have been started under Ballmer's regime, just as the bloated company they became started under the Gates regime. Each arguably did what was needed at the time. Let's not forget that the dot-com bubble burst at the beginning of the Ballmer era and that he had the 2008 financial crisis. There be dragons that are macroeconomic forces outside anyone's control. But Nadella had run R&D and cloud offerings. He emphasized research - which means innovation. He changed the mission statement to "empower every person and every organization on the planet to achieve more." He laid out a few strategies: to reinvent productivity and collaboration, power those with Microsoft's cloud platform, and expand on Windows and gaming.
And all of those things have been gangbusters ever since. They bought Mojang in 2014 and so are now the makers of Minecraft. They bought LinkedIn. They finally got Skype better integrated with the company so Teams could compete more effectively with Slack. Here's the thing. I knew a lot of people who worked at Microsoft during that lost decade, and many who still work there. And I think every one of them is really just top-notch. Looking at things as they unfold, you just see a weekly "patch Tuesday" increment. Everyone wanted to innovate - wanted to be their best self. And across every company we look at in this podcast, nearly every one has had to go through a phase of a lost few years or a lost decade. The ones who don't pull through can never turn the tide on culture and innovation. The two are linked. A bloated company with more layers of management breeds controlling managers who stifle innovation. At face value, the micro-aggressions seem plausible, especially to those younger in their careers. We hear phrases like "we need to justify or analyze the market for each expense/initiative" and that's true, or you become a Xerox PARC or Digital Research where so many innovations never get to market effectively. We hear phrases like "we're too big to do things like that any more" and yup, people running amok can be dangerous - turns out move fast and break things doesn't always work out. We hear "that requires approval" or "I'm their boss's boss's boss" or "you need to be a team player and run this by other leaders" or "we need more process" or "we need a center of excellence for that because too many teams are doing their own thing" or "we need to have routine meetings about this" or "how does that connect to the corporate strategy" or "we're a public company now so no" or "we don't have the resources to think about moon shots" or "we need a new committee for that" or "who said you could do that" - and all of these, taken as isolated comments, would be fine here or there. But the aggregate of so many micro-aggressions comes from a place of control, often stemming from fear of change or of being left behind, and they come at the cost of innovation. Charles Simonyi didn't leave Xerox PARC and go to Microsoft to write Microsoft Word only to become a cog in a wheel that's focused on revenue and not on changing the world. Microsoft simply got out-innovated by being crushed under the weight of too many layers of management that overly exerted control over those capable of building cool stuff. I've watched those who stayed be allowed to speak publicly again, engage with communities, take feedback, be humble, admit mistakes, and humanize the company. It's a privilege to get to work with them, and I've seen results like a change to a Graph API endpoint one night when I needed a new piece of data. They aren't running amok. They are precise, targeted, and allowed to do what needs to be done. And it's amazing how a chief molds the way a senior leadership team acts, and they mold the way directors direct, and they mold the way managers manage, and so on down the line. One aspect of culture is mission - another is values - and another is behaviors; together they make up the culture. And these days I gotta say I'm glad to have witnessed a turnaround like they've had, and every time I talk to a leader or an individual contributor at Microsoft I'm glad to feel their culture coming through. So here's where I'd like to leave this. We can all help shape a great culture. Leaders aren't the only ones who have an impact.
We can all innovate. An innovative company isn't one that builds a great innovative product (although that helps) but one that becomes an unstoppable force due to lots of small innovations at every level of the organization. Where are we allowing politics or a need for control and over-centralization to stifle others? Let's change that.
8/4/2021 - 21 minutes, 38 seconds
Episode Artwork

Babbage to Bush: An Unbroken Line Of Computing

The amount published in scientific journals has exploded over the past few hundred years. This helps in putting together a history of how various sciences evolved. And it sometimes helps us revisit areas for improvement - or predict what's on the horizon. The rise of computers often begins with stories of Babbage. As we've covered, a lot came before him, and those of the era were often looking to automate the calculation of increasingly complex mathematical tables. Charles Babbage was a true Victorian era polymath. A lot was happening as the world awoke to a more scientific era and scientific publications grew in number and size. Born in London, Babbage loved math from an early age and went away to Trinity College, Cambridge in 1810. There he helped form the Analytical Society with John Herschel - a pioneer of early photography, a chemist, and the inventor of the blueprint - and George Peacock, who established the British arm of algebraic logic, which, when picked up by George Boole, would go on to form part of Boolean algebra, ushering in the idea that everything can be reduced to a zero or a one. Babbage graduated from Cambridge, went on to become a Fellow of the Royal Society, and helped found the Royal Astronomical Society. He published works with Herschel on electrodynamics that were later used by Michael Faraday, and he even dabbled in actuarial tables - possibly to create a data-driven insurance company. His father passed away in 1827, leaving him a sizable estate. And after applying multiple times he finally became a professor at Cambridge in 1828. He and the others from the Analytical Society were tinkering with things like generalized polynomials and what we think of today as a formal power series, all of which can be incredibly tedious and time consuming. Because it's iterative. Pascal and Leibniz had pushed math forward and had worked on the engineering to automate various tasks, applying some of their science. This gave us Pascal's calculator and Leibniz's work on information theory and his calculus ratiocinator, as well as his stepped reckoner, built around what we now call the Leibniz wheel, with which he was able to perform all four basic arithmetic operations. Meanwhile, Babbage continued to bounce around between society, politics, science, and mathematics, and even wrote a book on manufacturing where he looked at rational design and profit sharing. He also looked at how tasks were handled and made observations about the skill level of each task and the human capital involved in carrying them out. Marx even picked up where Babbage left off and looked further into profitability as a motivator. He also invented the pilot, or cowcatcher, for trains and was involved with lots of learned people of the day. Yet Babbage is best known for being the old, crusty gramps of the computer. Or more specifically the difference engine, which is different from a differential analyzer. A difference engine was a mechanical calculator that could tabulate polynomial functions. A differential analyzer, on the other hand, solves differential equations using wheels and disks. Babbage expanded on the ideas of Pascal and Leibniz and added to mechanical computing, making the difference engine, the inspiration of many a steampunk work of fiction. Babbage started work on the difference engine in 1819. Multiple engineers built different components for the engine, and it was powered by a crank that spun a series of wheels, not unlike various clockworks available at the time.
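To make the difference engine's trick a little more concrete, here is a minimal sketch in Python of the method of finite differences it mechanized - the helper names and the sample quadratic are mine, purely for illustration. After the first row of differences is set up, every subsequent value of a polynomial comes from nothing but repeated additions, exactly the kind of work a crank and a stack of wheels can do.

# A rough sketch of the method of finite differences a difference engine mechanizes.
# The polynomial and helper names are illustrative, not from any historical source.

def difference_table(poly, start, degree):
    """Build the initial differences for a polynomial evaluated at consecutive integers."""
    values = [poly(start + i) for i in range(degree + 1)]
    diffs = [values]
    for _ in range(degree):
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    return [column[0] for column in diffs]  # first entry of each order of difference

def tabulate(poly, start, count, degree):
    """Produce count values of the polynomial using only additions, like the engine's wheels."""
    state = difference_table(poly, start, degree)
    out = []
    for _ in range(count):
        out.append(state[0])
        for i in range(degree):           # each wheel absorbs the one below it - pure addition
            state[i] += state[i + 1]
    return out

f = lambda x: 2 * x**2 + 3 * x + 1        # a sample quadratic
print(tabulate(f, 0, 8, 2))               # [1, 6, 15, 28, 45, 66, 91, 120]
print([f(x) for x in range(8)])           # matches, with no multiplication after setup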
The project was paid for by the British Government, who hoped it could save time calculating complex tables. Imagine doing all the work in spreadsheets manually. Each cell could take a fair amount of time and any mistake could be disastrous. But it was just a little before its time. The plans have since been built and they work, and while Babbage did produce a prototype capable of raising numbers to the third power and performing some quadratic equations, the project was abandoned in 1833. We'll talk about precision in a future episode. Again, the math involved in solving differential equations at the time was considerable and its time-intensive nature was holding back progress. So Babbage wasn't the only one working on such ideas. Gaspard-Gustave de Coriolis, known for the Coriolis effect, was studying the collisions of spheres and became a professor of mechanics in Paris. To aid in his work, he designed the first mechanical device to integrate differential equations in 1836. After Babbage scrapped his first engine, he moved on to the analytical engine, adding conditional branching, loops, and memory - and further complicating the machine. The engine borrowed the punchcard tech from the Jacquard loom and applied that same logic, along with the work of Leibniz, to math. The inputs would be formulas, much as Turing later described when concocting some of what we now call Artificial Intelligence. Essentially all problems could be solved given a formula, and the output would go to a printer. The analytical machine had 1,000 numbers' worth of memory and a logic processor, or arithmetic unit, that he called a mill - which we'd call a CPU today. He even planned on a programming language, which we might think of as assembly today. All of this brings us to the fact that, while never built, it would have been Turing-complete, in that the simulation of those formulas was effectively a Turing machine. Ada Lovelace contributed an algorithm for computing Bernoulli numbers on the engine, giving us a glimpse into what an open source collaboration might some day look like. And she was in many ways the first programmer - and the daughter of Lord Byron and Anne Milbanke, a math whiz. She became fascinated with the engine and ended up becoming an expert at creating a set of instructions to punch on cards, thus the first programmer of the analytical engine and far before her time. In fact, there would be no programmer with her depth of understanding for 100 years. Not to make you feel inadequate, but she was 27 in 1843. Luigi Menabrea published a description of the engine in French, spreading the idea to the continent. And yet Babbage died in 1871 without a working model. During those years, Per Georg Scheutz built a number of difference engines based on Babbage's published works - also funded by a government - which evolved to become the first calculating machines that could print. Martin Wiberg picked up from there and was able to move to 20-digit processing. George Grant at Harvard developed calculating machines and published his designs by 1876, starting a number of companies to fabricate gears along the way. James Thomson built a differential analyzer in 1876 to predict tides. And that's when his work on fluid dynamics and other technology became the connection between these machines and the military. Thomson's work would be added to work done by Arthur Pollen and we got our first automated fire-control systems. Percy Ludgate and Leonardo Torres y Quevedo wrote about Babbage's work in the early years of the 1900s, and other branches of math needed other types of mechanical computing.
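Going back to Lovelace's note for a moment: since it centered on computing Bernoulli numbers, here is a small modern sketch of them in Python using the standard recurrence. This is only the flavor of the calculation her program laid out for the analytical engine, not her actual notation or method, and the helper name is mine.

# Bernoulli numbers via the classic recurrence (a modern illustration, not Lovelace's notation).
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B0..Bn as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-acc / (m + 1))
    return B

print(bernoulli(8))  # B0..B8: 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30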
Burroughs built a difference engine in 1912 and another in 1929. The differential analyzer was picked up by a number of scientists in those early years. But Vannevar Bush was perhaps one of the most important. He, with Harold Locke Hazen, built one at MIT and published an article on it in 1931. Here's where everything changes. The information was out there in academic journals. Bush published another article in 1936 connecting his work to Babbage's. Bush's designs were used by a number of universities and picked up by the Ballistic Research Laboratory in the US. One of those installations was in the same basement ENIAC would be built in. Bush did more than inspire other mathematicians. Sometimes he paid them. His research assistant was Claude Shannon, who described the General Purpose Analog Computer in 1941 and went on to found the whole field of information theory, down to the bits and bytes. Shannon's work was important as it came shortly after Alan Turing's work on Turing machines, and so it has been seen as a means to get to this concept of general, programmable computing - basically revisiting the Babbage concept of a thinking, or analytical, machine. And Howard Aiken went a step further than mechanical computing and into electromechanical computing with the Mark I, where he referenced Babbage's work as well. Then we got the Atanasoff-Berry Computer in 1942. By then, our friend Bush had gone on to chair the National Defense Research Committee, where he would serve under Roosevelt and Truman and, as an administrator, help develop radar and the Manhattan Project, coordinating over 5,000 research scientists. Some helped with ENIAC, which was completed in 1945, thus beginning the era of programmable, digital, general purpose computers. Seeing how computers helped break Enigma machine encryption, solve the equations, blow up targets better, and solve problems that held science back was one thing - but unleashing such massive and instantaneous violence as the nuclear bomb caused Bush to write an article for The Atlantic called As We May Think, which inspired generations of computer scientists. Here he laid out the concept of a Memex, or a general purpose computer that every knowledge worker could have. And thus began the era of computing. What we wanted to look at in this episode is how Babbage wasn't an anomaly. Just as Konrad Zuse wasn't. People published works, added to the works they read about, cited works, and pulled in concepts from other fields, and we have unbroken chains in our understanding of how science evolves. Some, like Konrad Zuse, might have been operating outside of this peer-review process - but he eventually got around to publishing as well.
7/29/2021 - 14 minutes, 28 seconds
Episode Artwork

How Venture Capital Funded The Computing Industry

Investors have pumped capital into emerging markets since the beginning of civilization. Egyptians explored basic mathematics and used their findings to build larger structures and even granaries to allow merchants to store food and serve larger and larger cities. Greek philosophers expanded on those learnings and applied math to learn the orbits of planets, the size of the moon, and the size of the earth. Their merchants used the astrolabe to expand trade routes. They studied engineering and so learned how to leverage the six simple machines to automate human effort, developing mills and cranes to construct even larger buildings. The Romans developed modern plumbing and aqueducts and gave us concrete and arches and radiant heating and bound books and the postal system. Some of these discoveries were state sponsored; others came from wealthy financiers. Many an early investment went into trade routes, which fueled humanity's ability to understand the world beyond their little piece of it, improved the flow of knowledge, and mixed knowledge from culture to culture. As we covered in the episode on clockworks and the series on science through the ages, many a scientific breakthrough was funded by religion as a means of wowing the people. And then by autocrats and families who'd made their wealth from those trade routes. Over the centuries of civilizations we got institutions that could help finance industry. Banks loan money using an interest rate that matches the risk of their investment. It's illegal - going back to the Bible - to overcharge on interest. That's called usury, something the Romans realized during their own cycles of too many goods driving down costs and too few fueling inflation. And yet, innovation is an engine of economic growth - and so needs to be nurtured. The rise of capitalism meant more and more research was done privately and so needed to be funded. And with it came the rise of intellectual property as a good. Yet banks have never embraced startups. The early days of the British Royal Society were filled with researchers from the elite. They could self-fund their research, and the more people doing research, the more discoveries we made as a society. Early American inventors tinkered in their spare time as well. But the pace of innovation has advanced because of financiers as much as because of the hard work and long hours. Companies like DuPont helped fuel the rise of plastics with dedicated research teams. Railroads were built by raising funds. Trade grew. Markets grew. And people like JP Morgan knew those markets when they invested in new fields and were able to grow wealth and inspire new generations of investors. And emerging industries ended up dominating the places that merchants once held in the public financial markets. Going back to the Venetians, public markets have required regulation. As banking became more of a necessity for scalable societies it too required regulation - especially after the Great Depression. And yet we needed new companies willing to take risks to keep innovation moving ahead, as we do today. And so the emergence of the modern venture capital market came in those years, with a few people willing to take on the risk of investing in the future. John Hay "Jock" Whitney was an old-money type who also started a firm. We might think of it more as a family office these days, but he had acquired 15% of Technicolor and then went on to get more professional and invest.
Jock's partner in the adventure was a fellow Delta Kappa Epsilon member, from the University of Texas chapter, Benno Schmidt. Schmidt coined the term venture capital, and they helped pivot Spencer Chemicals from a munitions plant to fertilizer - they're both nitrates, right? They helped bring us Minute Maid, and more recently have been in and out of Herbalife, Joe's Crab Shack, Igloo coolers, and many others. But again, it was mostly Whitney money - and we tend to think of venture capital funds as pooling money from more than one investor to fund new and enterprising companies. And one of those venture capitalists stands out above the rest. Georges Doriot moved to the United States from France to get his MBA from Harvard. He became a professor at Harvard, and a shrewd business mind led to him being tapped as the Director of the Military Planning Division for the Quartermaster General. He would be promoted to brigadier general following a number of massive successes in research and development as part of the pre-World War II military-industrial-academic buildup. After the war Doriot created the American Research and Development Corporation, or ARDC, with the former president of MIT, Karl Compton, and engineer-turned-Senator Ralph Flanders - all of them wrote books about finance, banking, and innovation. They proved that the R&D behind innovation could be capitalized to great return. The best example of their success was Digital Equipment Corporation, in which they invested $70,000 in 1957 and turned that into over $350 million when DEC went public in 1968 - netting over 100% a year of return. Unlike Whitney, ARDC took outside money, and so Doriot became known as the first true venture capitalist. Those post-war years led to a level of patriotism we arguably haven't seen since. John D. Rockefeller Jr. had inherited a fortune from his father, who built Standard Oil. To oversimplify, that company was broken up into a variety of companies including what we now think of as Exxon, Mobil, Amoco, and Chevron. But the family was one of the wealthiest in the world, and the five brothers who survived John Jr. built an investment firm they called the Rockefeller Brothers Fund. We might think of the fund as a social-good investment fund these days. Following the war, in 1951, John D. Rockefeller Jr. endowed the fund with $58 million, and in 1956, deep in the Cold War, the fund's president Nelson Rockefeller financed a study and hired Henry Kissinger to dig into the challenges of the United States. And then came Sputnik in 1957 and a failed run for the presidency of the United States by Nelson in 1960. Meanwhile, the fund was helping do a lot of good but also helping to research companies Venrock would capitalize. The family had been investing since the 30s, but Laurance Rockefeller set up Venrock, a mashup of venture and Rockefeller. In Venrock, the five brothers, their sister, MIT's Ted Walkowicz, and Harper Woodward banded together to sprinkle funding into what is now over 400 companies that include Apple, Intel, PGP, CheckPoint, 3Com, DoubleClick - and the list goes on. Over 125 public companies have come out of the fund today, with an unimaginable amount of progress pushing the world forward. The government was still doing a lot of basic research in those post-war years that led to standards and patents and pushed innovation forward in private industry. ARDC caught the attention of a number of other people who had money they needed to put to work.
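As a quick sanity check on that "over 100% a year" figure, here is a back-of-the-envelope compound annual growth rate calculation in Python. The dollar amounts and dates are simply the ones cited above, taken at face value; treating it as an eleven-year hold is my assumption.

# Rough CAGR check on the ARDC/DEC numbers as cited in this episode (illustrative only).
initial = 70_000          # ARDC's 1957 investment in DEC
final = 350_000_000       # approximate value cited for the 1968 IPO
years = 1968 - 1957
cagr = (final / initial) ** (1 / years) - 1
print(f"{cagr:.0%}")      # roughly 117% per year, i.e. "over 100% a year"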
Some were started by ARDC alumni such as Charlie Waite and Bill Elfers, who, with Dan Gregory, founded Greylock Partners. Greylock has invested in everyone from Red Hat to Staples to LinkedIn to Workday to Palo Alto Networks to Drobo to Facebook to Zipcar to Nextdoor to OpenDNS to Redfin to ServiceNow to Airbnb to Groupon to Tumblr to Zenprise to Dropbox to IFTTT to Instagram to Firebase to Wandera to Sumo Logic to Okta to Arista to Wealthfront to Domo to Lookout to SmartThings to Docker to Medium to GoFundMe to Discord to Houseparty to Roblox to Figma. Going on 800 investments just since the 90s, they are arguably one of the greatest venture capital firms of all time. Other firms came out of pure security analyst work. Hayden, Stone & Co. was co-founded by another MIT grad, Charles Hayden, who made his name mining copper to help wire up what he expected to be an increasingly electrified world. Stone was a Wall Street tycoon, and the two of them founded a firm that employed Joe Kennedy, the Kennedy family patriarch, and Frank Zarb, a chairman of the NASDAQ - and it gave us one of the great venture capitalists to fund technology companies, Arthur Rock. Rock has often been portrayed as the bad guy in Steve Jobs movies, but he was the one who helped the "Traitorous 8" leave Shockley Semiconductor and, after Eugene Kleiner's father (who had an account at Hayden, Stone) mentioned they needed funding, got serial entrepreneur Sherman Fairchild to fund Fairchild Semiconductor. Fairchild had developed tech for the Apollo missions, camera flashes, and spy satellite photography - and that semiconductor business grew to 12,000 people and was a bedrock of what we now call Silicon Valley. Rock ended up moving to the area and investing, parlaying success in the Fairchild investment into investing in Intel when Moore and Noyce left Fairchild to co-found it. Venture capital firms raise money from institutional investors that we call limited partners and invest that money. After moving to San Francisco, Rock set up Davis and Rock, got some limited partners, including friends from his time at Harvard, and invested in 15 companies, including Teledyne and Scientific Data Systems, which got acquired by Xerox, taking their $257,000 investment to a $4.6 million valuation in 1970 and getting him on the board of Xerox. He dialed for dollars for Intel and raised another $2.5 million in a couple of hours, and became the first chair of their board. He made all of his LPs a lot of money. One of those Intel employees who became a millionaire and retired young was Mike Markkula. Markkula invested some of his money in Apple and Rock put in $57,000 - growing it to $14 million - and went on to launch or invest in companies and make billions of dollars in the process. Another firm that came out of the Fairchild Semiconductor days was Kleiner Perkins. It was started in 1972 by Eugene Kleiner and Tom Perkins, later joined by Frank Caufield and Brook Byers. Kleiner was one of those Traitorous 8 who left William Shockley and founded Fairchild Semiconductor. He later hooked up with Tom Perkins, a former head of research and development at HP and yet another MIT and Harvard grad. Perkins would help Corning, Philips, Compaq, and Genentech - serving on boards and helping them grow. Caufield came out of West Point and got his MBA from Harvard as well. He'd go on to work with Quantum, AOL, Wyse, Verifone, Time Warner, and others.
Byers came to the firm shortly after getting his MBA from Stanford and started four biotech companies that were incubated at Kleiner Perkins - netting the firm over $8 billion. And they taught future generations of venture capitalists. People like John Doerr, who was a great salesman at Intel but by 1980 had graduated into venture capital, bringing in deals with Sun, Netscape, Amazon, Intuit, Macromedia, and one of the best gambles of all time - Google. And his reward is a net worth of over $11 billion. But more importantly, he helped drive innovation and shape the world we live in today. Kleiner Perkins was the first to move onto Sand Hill Road. From there, we got the rise of the dot-coms and sky-high rent, on par with Manhattan. Why? Because dozens of venture capital firms opened offices on that road, including Lightspeed, Highland, Blackstone, Accel-KKR, Silver Lake, Redpoint, Sequoia, and Andreessen Horowitz. Sequoia also started in the 70s, founded by Don Valentine, with leadership passing to Doug Leone and Michael Moritz in the 90s. Valentine did sales for Raytheon before joining National Semiconductor, which had been founded by a few Sperry Rand traitors and brought in some execs from Fairchild. They were venture backed, and his background in sales helped propel some of Sequoia's earlier investments in Apple, Atari, Electronic Arts, LSI, Cisco, and Oracle to success. And that allowed them to invest in a thousand other companies including Yahoo!, PayPal, GitHub, Nvidia, Instagram, Google, YouTube, Zoom, and many others. So far, most of the firms we've mentioned have been in the US. But venture capital is a global trend. Masayoshi Son founded SoftBank in 1981 to sell software, then published some magazines and grew the circulation to the point that they were Japan's largest technology publisher by the end of the 80s, and then went public in 1994. They bought Ziff Davis publishing and COMDEX, and seeing so much technology and the money in technology, Son inked a deal with Yahoo! to create Yahoo! Japan. They pumped $20 million into Alibaba in 2000 and by 2014 that investment was worth $60 billion. In that time they became more aggressive with where they put their money to work. They bought Vodafone Japan, took over competitors, and then the big one - they bought Sprint, which they merged with T-Mobile and now own a quarter of the combined company. An important aspect of venture capital and private equity is multiple expansion: the market capitalization of Sprint more than doubled, with shares shooting up over 10%. They bought Arm Limited, the semiconductor company that designs the chips in many a modern phone, IoT device, tablet, and even computer now. As with other financial firms, not all investments can go great. SoftBank pumped nearly $5 billion into WeWork. Wag failed. 2020 brought staff reductions. They had to sell tens of billions in assets to weather the pandemic. And yet, even with some high-profile losses, they sold Arm for a huge profit, Coupang went public, and investors in their Vision Funds are seeing phenomenal returns across more than 200 companies in the portfolios. Most of the venture capitalists we mentioned so far invested as early as possible and stuck with the company until an exit - be it an IPO, acquisition, or even a move into private equity.
Most got a seat on the board in exchange for not only their seed capital, or the money to take products to market, but also their advice. In many a company the advice was worth more than the funding. For example, Randy Komisar, now at Kleiner Perkins, famously recommended TiVo sell monthly subscriptions - the growth hack they needed to get profitable. As the venture capital industry grew and more and more money was being pumped into fueling innovation, different accredited and institutional investors emerged with different tolerances for risk and different skills to bring to the table. Someone who built an enterprise SaaS company and sold it within three years might be better served to invest in and advise another company doing the same thing. Just as someone who had spent 20 years running later-stage companies and taking them to IPO was better at advising later-stage startups that maybe weren't startups any more. Here's a fairly common startup story. After finishing a book on Lisp, Paul Graham decided to found a company with Robert Morris. That was Viaweb in 1995, one of the earliest SaaS startups, which hosted online stores - similar to a Shopify today. Viaweb had an investor named Julian Weber, who invested $10,000 in exchange for 10% of the company. Weber gave them invaluable advice, and they were acquired by Yahoo! for about $50 million in stock in 1998, becoming the Yahoo! Store. Here's where the story gets different. In 2005 Graham decided to start doing seed funding for startups, following the model that Weber had established with Viaweb. He and Viaweb co-founders Robert Morris (the guy who wrote the Morris worm) and Trevor Blackwell started Y Combinator, along with Jessica Livingston. They put in $200,000 to invest in companies, and with successful investments grew to a few dozen companies a year. They're different because they pick a lot of technical founders (like themselves) and help the founders find product-market fit, finish their solutions, and launch. And doing so helped them bring us Airbnb, DoorDash, Reddit, Stripe, Dropbox, and countless others. Notice that many of these firms have funded the same companies. This is because multiple funds investing in the same company helps distribute risk. But also because, in an era where we've put everything from cars to education to healthcare to innovation on an assembly line, we have an assembly line for companies. We have thousands of angel investors, or humans who put capital to work by investing in companies they find through friends, family, and now portals that connect angels with companies. We also have incubators, a trend that began in the late 50s in New York when Joe Mancuso opened a warehouse up for small tenants after buying it to help the town of Batavia. The Batavia Industrial Center provided office supplies, equipment, secretaries, a line of credit, and most importantly advice on building a business. They had made plenty of money on chicken coops and thought that maybe helping companies start was a lot like incubating chickens - and so incubators were born. Others started incubating. The concept expanded from local entrepreneurs helping other entrepreneurs, and now cities, think tanks, companies, and even universities offer incubation within their walls. Keep in mind many a university owns a lot of patents developed there, and plenty of companies have sprung up to commercialize the intellectual property incubated there.
Seeing that, and how technology companies needed to move faster, we got accelerators like Techstars, founded by David Cohen, Brad Feld, David Brown, and Jared Polis in 2006 out of Boulder, Colorado. They have worked with over 2,500 companies and run a couple of dozen programs. Some of the companies fail by the end of their cohort, and yet many, like Outreach and SendGrid, grow and become great organizations or get acquired. The line between incubator and accelerator can be pretty slim today. Many of the earlier companies mentioned are now the more mature venture capital firms, and many have moved to a focus on later-stage companies, with YC and Techstars investing earlier. They attend the demos of companies being accelerated and invest. And given that founding companies and innovating is now on an assembly line, the firms that invest in an A round of funding, which might come after an accelerator, will look to exit in a B round, a C round, etc. Or they may elect to continue their risk all the way to an acquisition or IPO. And we have a bevy of investing companies focusing on the much later stages. We have private equity firms and family offices that look to outright own, expand, and either harvest dividends from or sell an asset, or company. We have traditional institutional lenders who provide capital but also invest in companies. We have hedge funds who hedge puts and calls or other derivatives on a variety of asset classes. Each has their sweet spot, even if most will opportunistically invest in diverse assets. Think of the investments made as horizons. The angel investor might have their shares acquired in later rounds in order to clean up the cap table - the record of who owns which parts of a company. This simplifies the shareholder structure as the company takes on larger institutional investors to sprint towards an IPO or an acquisition. People like Arthur Rock, Tommy Davis, Tom Perkins, Eugene Kleiner, Doerr, Masayoshi Son, and so many others have proven that they could pick winners. Or did they prove they could help build winners? Let's remember that investing knowledge and operating experience were as valuable as their capital, especially when the investments were adjacent to other successes they'd found. Venture capitalists invested more than $10 billion in 1997. $600 million of that found its way to early-stage startups. But most went to preparing a startup with a product to take it to mass market. Today we pump more money than ever into R&D - and our tax systems support doing so more than ever. And so more than ever, venture money plays a critical role in the life cycle of innovation. Or does venture money play a critical role in the commercialization of innovation? Seed accelerators, startup studios, venture builders, public incubators, venture capital firms, hedge funds, banks - they'd all have a different answer. And they should. Few would stick with an investment like Digital Equipment for as long as ARDC did. And yet few provide over 100% annualized returns like they did. As we said in the beginning of this episode, wealthy patrons - from pharaohs to governments to industrialists to now venture capitalists - have long helped to propel innovation, technology, trade, and intellectual property. We often focus on the technology itself in computing - but without the money, the innovation either wouldn't have been developed or, if developed, wouldn't have made it to the mass market and so wouldn't have had an impact on our productivity or quality of life.
The knowledge that comes along with those who provide the money is sometimes viewed with irreverence. Taking an innovation to market means market-ing. And sales. Most generations see the previous generations as almost comedic, as we can see in the HBO show Silicon Valley when the cookie-cutter, industrialized approach goes too far. We can also end up with founders who learn to sell to investors rather than raising capital in the best way possible: selling to paying customers. But there's wisdom from previous generations, when offered and taken appropriately. A coachable founder with a vision that matches the coaching and a great product that can scale is the best investment that can be made. Because that's where innovation can change the world.
7/24/2021 - 30 minutes, 14 seconds
Episode Artwork

Albert Cory Talks About His New Book, Inventing The Future

Author Albert Cory joins the podcast in this episode to talk about his new book, Inventing the Future. Inventing the Future was a breath of fresh air from an inspirational time and person. Other books have told the story of how the big names in computing were able to commercialize many of the innovations that came out of Xerox PARC. But Inventing the Future adds a really personal layer that ties the culture of the day (music, food, geography, and even interpersonal relationships) to what was happening in computing - which, within a couple of decades, would wildly change how we live our lives. We're lucky he made the time to discuss his take on a big evolution in modern technology through the lens of historical fiction. I would absolutely recommend the book to academics and geeks and just anyone looking to expand their minds. And we look forward to having him on again!
7/16/2021 - 45 minutes, 45 seconds
Episode Artwork

Where Fast Food Meets Point of Sale, Automation, and Computing

Roy Allen opened his first root beer stand in 1919, in Lodi, California. He'd bought a recipe for root beer and boy, it sure was a hit. He brought in people to help. One was Frank Wright, who would become a partner in the endeavor; they'd change the name to A&W Root Beer, for their names, and open a restaurant in 1923 in Sacramento, California. Allen bought Wright out in 1925, but kept the name. Having paid for the root beer license, he decided to franchise out the use of that - but let's not call that the first fast food chain just yet. After all, it was just a license to make root beer, just like he'd bought the recipe all those years ago. A&W's Allen sold the company in 1950 to retire. The franchise agreements moved from a cash payment to royalties. But after Allen, the ownership of the company bounced around until it landed with United Fruit, which would become United Brands, who took A&W to the masses. The root beer company was split from the restaurant chain, with the chain owned for a time by Yum! Brands and now at nearly 1,000 locations and over $300M in revenues.
White Castle
As A&W franchised, some experimented with other franchising options or with not going that route at all. Around the same time Allen opened his first stand, Walt Anderson was running a few food stands around Wichita. He met up with Billy Ingram and in 1921 they opened the first White Castle, putting in $700 of their own money. By 1927 they expanded out to Indianapolis. As is often the case, the original cook with the concept sold out his part of the business in 1933, when they moved their headquarters to Columbus, Ohio, and the Ingram family expanded all over the United States. Many a fast food chain is franchised, but White Castle has stayed family-owned and operates profitably without taking on debt to grow.
Kentucky Fried Chicken
KFC is fried chicken. They sell some other stuff I guess. They were started by Harland Sanders in 1930, but as we see with a lot of these, they didn't start franchising until after the war. His big hack was to realize he needed to cook chicken faster to serve more customers, and so he converted a pressure cooker into a pressure fryer, completely revolutionizing how food is fried. He perfected his original recipe in 1940 and by 1952 was able to parlay his early success into franchising out what is now the second largest fast food chain in the world. But the largest is McDonald's.
McDonald's
1940 comes around and Richard and Maurice McDonald open a little restaurant called McDonald's. It was a drive-up barbecue joint in San Bernardino. But drive-in restaurants were getting competitive, and while looking back at the business, they realized that four-fifths of their sales were hamburgers. So they shut down for a bit, got rid of the car hops that were popular at the time, simplified the menu, and trimmed out everything they could - getting down to fewer than 10 items on the menu. They were able to get prices down to 15-cent hamburgers using something they called the Speedee Service System. That was an assembly line of food preparation that became the standard in the fast food industry over the next few decades. They also looked at industrial equipment and used that to add french fries and shakes, which finally unlocked an explosion of sales, and profits doubled. But then a milkshake mixer salesman paid a visit to them in San Bernardino to see why the brothers needed 8 of his mixers and was amazed to find they were, in fact, cranking out 48 shakes at a time with them.
The assembly line opened his eyes, and he bought the rights to franchise the McDonald's concept, opening his first location in Des Plaines, Illinois. One of the best growth hacks for any company is just to have an amazing sales and marketing arm. OK, so not a hack, but just good business. And Ray Kroc will go down as one of the greatest. From those humble beginnings selling milkshake mixers, he moved from licensing to buying the company outright for $2.7 million in 1961. Another growth hack was to realize, thanks to a former VP at Tastee-Freez, that owning the real estate brought yet another revenue stream. A low deposit and a 20% or higher increase in the monthly spend would grow into a nearly $38 billion revenue stream. The highway system was paying dividends to the economy. People were moving out to the suburbs. Cars were shipping in the highest volumes ever. They added the Filet-O-Fish and were exploding internationally in the 60s and 70s, and they're now sitting on over 39,000 stores with about a $175 billion market cap and over $5 billion in revenue.
Diners, Drive-ins, and Dives
Those post-war years were good to fast food. Anyone that's been to a 50s-themed restaurant can see the car culture on display, and drive-ins were certainly a part of that. People were living their lives at a new pace to match the speed of those cars, and it was a golden age of growth in the United States. The computer industry was growing right along with those diners, drive-ins, and dives. One company that started before World War II and grew fast was Dairy Queen, started in 1940 by John Fremont McCullough. He'd invented soft-serve ice cream in 1938 and opened the first Dairy Queen in Joliet, Illinois with his friend Sherb Noble, who'd been selling his soft-serve ice cream out of his shop for a couple of years. During those explosive post-war 1950s years they introduced the Dilly Bar, and they have now expanded to 6,800 locations around the world. William Rosenberg opened a little coffee shop in Quincy, Massachusetts. As with the others in this story, he parlayed quick successes and started to sell franchises in 1955, and Dunkin' Donuts grew to 12,400 locations. In-N-Out Burger started in 1948 as well, founded by Harry and Esther Snyder, and while they've only expanded around the west coast of the US, they've grown to around 350 locations and stayed family-owned. Pizza Hut was started in 1958 in Wichita, Kansas. While it was more of a restaurant for a long time, it's now owned by Yum! Brands and operates well over 18,000 locations. Yum! also owns KFC and Taco Bell. Glen Bell served as a cook in World War II and moved to San Bernardino to open a drive-in hot dog stand in 1948. He sold it and started a taco stand, selling tacos for 19 cents apiece, expanding to three locations by 1955, and went serial entrepreneur - selling those locations and opening four new ones he called El Tacos down in Long Beach. He sold that to his partner in 1962 and started his first Taco Bell, finally ready to start selling franchises in 1964, and grew it to 100 restaurants by 1967. They took Taco Bell public in 1970 when they had 325 locations. And Pepsi bought the 868-location chain in 1978 for $125 million in stock, eventually spinning the food business off into what is now called Yum! Brands and co-branding with cousin restaurants in that portfolio - Pizza Hut and Long John Silver's.
I haven't been to a Long John Silver's since I was a kid, but they still have over a thousand locations and date back to a hamburger stand started in 1929 that over the years pivoted to a roast beef sandwich shop and kept pivoting until landing on the fish-and-chips concept in 1969.
The Impact of Computing
It's hard to imagine that any of these companies could have grown the way they did with nothing more than an assembly line of human effort. Mechanical cash registers had been around since the Civil War in the United States, with James Ritty filing early patents in 1883; Charles Kettering would later electrify the cash register at NCR. Arguably the abacus and counting frame go back way further, but the Ritty Model I patent sparked the interest of Jacob Eckert, who bought the patent, added some features, and took on $10,000 in debt to take the cash register to market, forming the National Manufacturing Company. That became National Cash Register, still a more than $6 billion market cap company. But the growth of IBM and other computing companies, the release of semiconductors, and the miniaturization and dropping costs of printed circuit boards helped lead to the advent of electronic cash registers. After all, those are just purpose-built computers. IBM introduced the first point of sale system in 1973, bringing the cash register into the digital age. Suddenly a cash register could sit in the front as a simplified terminal that sent printouts or information to a screen in the back. Those IBM 3650s evolved into the first use of peer-to-peer client-server technology and ended up in Dillard's in 1974. That same year McDonald's had William Brobeck and Associates develop a microprocessor-based terminal. It was based on the Intel 8008 chip and used a simple push-button device to allow cashiers to enter orders. This gave us a queue of orders being sent by terminals in the front. And we got touchscreen registers in 1986, running on the Atari 520ST, with IBM introducing a 486-based system running on FlexOS.
Credit Cards
As we moved into the 90s, fast food chains were spreading fast and the way we paid for goods was starting to change. All these electronic registers could suddenly send the amount owed over an electronic link to a credit card processing machine. John Biggins launched the Charg-It card in 1946 and it spread to Franklin National Bank a few years later. Diners Club picked up on the trend and launched the Diners Club Card in 1950, growing to 20,000 cardholders in 1951. American Express came along in 1958 with their card and in just five years grew to a million cards. Bank of America released their BankAmericard in 1958, which became the first general-purpose credit card. They started in California and went national in the first ten years. That would eventually evolve into Visa, and in 1966 we got what would become MasterCard as well. That's also the same year the Barclaycard brought credit cards outside the US for the first time, showing up first in England. Then came Carte Bleue in '67 in France and the Eurocard, a collaboration between the Wallenberg family and Interbank in 1968 to serve the rest of Europe. Those spread, and by the 90s we had enough people using them to reach a critical mass where fast food needed to take them as well. Whataburger and Carl's Jr added the option in 1989, Arby's in 1990, and while slower to adopt taking cards, McDonald's finally did so in 2002. We were well on our way to becoming a cashless society.
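To make that front-of-house/back-of-house split a little more concrete, here is a toy sketch in Python of a terminal up front queueing orders for a screen in the back and totaling the ticket that would be handed off to a card processor. Every name, item, and price here is made up for illustration, and real POS systems are of course far more involved.

# Toy model of the point-of-sale flow described above (illustrative only).
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Order:
    number: int
    items: list = field(default_factory=list)   # (item, price) pairs

    def total(self):
        return sum(price for _, price in self.items)

kitchen_queue = Queue()   # what the back-of-house screen reads from

def ring_up(number, items):
    """Front terminal: cashier pushes buttons, the order goes onto the kitchen queue."""
    order = Order(number, items)
    kitchen_queue.put(order)
    return order.total()   # the amount that would be sent on to a card processor

due = ring_up(1, [("hamburger", 0.15), ("fries", 0.10), ("shake", 0.20)])
print(f"Order 1 total: ${due:.2f}")                       # Order 1 total: $0.45
print("Next for the kitchen:", kitchen_queue.get().items)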
And the rise of the PC led to POS systems moving a little down-market, with systems like Aloha, designed in 1998 (now owned by NCR), lots of other brands of devices, and home-brewed tooling from large vendors. And computers helped revolutionize the entire organization. Chains could automate supply lines to stores with computerized supply chain management. Desktop computers also led to management functions being computerized in the back office, like scheduling and time clocks, and so fewer managers were needed. That was happening all over post-war America by the 90s.
Post-War America
In that era after World War II, people were fascinated with having the same experiences over and over - and having them be identical. Think about it: before the war, life was slower and every meal required work. After, it was fast, and the food always came out hot and felt like suburban life, wherever you were. Even as white flight was hollowing out city centers and that homogeneity led to further centralized organizations dividing communities. People flocked to open these restaurants. They could make money, it was easier to get a loan to open a store with a known brand, there were high profit margins, and in a lot of cases there was a higher chance of success than in many other industries. This led to even more homogeneity. That rang true for other types of franchising on the rise as well. Fast food became a harbinger of things to come and indicative of other business trends as well. These days we think of high fructose corn syrup, fried food, and GMOs when we think of fast food. And that certainly fueled the rise. People who eat fast food want that. Following the first wave of fast food we got other brands rising as well. Arby's was founded in 1964, Subway in 1965, Wendy's in 1969, Jack in the Box in 1961, and Chick-fil-A in 1946, just a few miles from where I was born. And newer chains like Quiznos in 1981, Jimmy John's in 1983, and Chipotle in 1993. These touch other areas of the market, focusing on hotter, faster, or spicier. From the burger craze to the drive-in craze to just plain fast, fast food has been with us since long before anyone listening to this episode was born and is likely to continue on long after we're gone. Love it or hate it, it's a common go-to when we're working on systems - especially far from home. And the industry continues to evolve. A barrier to opening any type of retail chain was once the point of sale system. Another was finding a way to accept credit cards. Stripe emerged to help with the credit cards, and a cadre of app-based solutions for the iPhone, Android, and tablets emerged to help make taking credit cards simple for new businesses. A lot of the development was once put into upmarket solutions, but these days downmarket is so much more approachable. And various fraud-prevention machine learning algorithms and chip-and-PIN technologies make taking a credit card for a simple transaction safer than ever.
The Future
Fast food, and retail in general, continues to evolve. The next evolution seems to be self-service. This is well underway: a number of companies are looking at kiosks to take orders, and all those cashiers might find RFID tags to be another threat to their jobs. If a machine can see what's in a cart on the way out of a store, there's no need for cashiers.
Here, digitization was one wave of technology, but given how inexpensive labor has been, we are only now seeing the cost of the technology come down to where it's cheaper than the people. Much as falling costs of clockworks and then industrialization displaced first Roman slave labor and then workers in factories. Been to a parking ramp recently? That's a controlled enough environment that the attendants were some of the first to be replaced with simple computers, which processed first magnetic stripes and now license plates using simple character recognition technology. Another revolution that has already begun is how we get the food. Grubhub launched in 2004, we got Postmates in 2011, and DoorDash came in 2013 so we don't even have to leave the house to get our burger fix. We can just open an app, use our fingerprint to check out, and have items show up at our homes, often in less time than if we'd gone to pick them up. And given that they have a lot of drivers and know exactly where those drivers are, Uber attempted to merge with DoorDash in 2019 - but that's fine, because they'd already launched Uber Eats in 2014. But DoorDash has about half of that market, at $2.9 billion in revenue for 2020, and that's with just 18 million users - still a small fraction of US households. I guess that's why DoorDash enjoys a nearly $60 billion market cap. We are in an era of technology empires. And yet McDonald's is only worth about three times what DoorDash is worth - and guess which one is growing faster. Empires come and go.  Trying to manage an empire that scales beyond what its technology and communications allow has been the downfall of many an empire - from Rome to Poland to the Russian Czarist empire. Each was profoundly changed, whether by splitting up, as with Rome; becoming a pawn between neighboring empires; or developing an entirely new system of governance, as with Russia. Fast food employs four and a half million people in the US today, with almost 10 million more employed globally. About half of those are adults. The industry has grown from revenues of just $6 billion in 1970 to half a trillion dollars today. And those employees often make minimum wage. Think about this: that's over twice as many people as there were slaves in the Roman Empire - many of whom rose up to conquer the empire. And the name of the game is automation. It has been since that McDonald's Speedee Service System that enthralled Ray Kroc. But the human labor will some day soon be drastically cut, just as the McDonald brothers cut carhops from their roster all those years ago. And that domino will knock down others in every establishment we walk into to pay for goods. Probably not in the next 5 years, but certainly in my lifetime. Job displacement due to technology is nothing new. It goes back past the Romans. But it is accelerating faster than at other points in history. And you have to wonder what kinds of social, political, and economic repercussions we'll have. Add in other changes around the world and the next few decades will be interesting to watch. 
7/16/2021 · 24 minutes, 59 seconds
Episode Artwork

A broad overview of how the Internet happened

The Internet is not a simple story to tell. In fact, every sentence here is worthy of an episode if not a few.  Many would claim the Internet began back in 1969 when the first node of the ARPAnet went online. That was the year we got the first color pictures of Earth from Apollo 10 and the year Nixon announced the US was leaving Vietnam. It was also the year of Stonewall, the moon landing, the Manson murders, and Woodstock. A lot was about to change. But maybe the story of the Internet starts before that, when the basic research to network computers began as a means of networking nuclear missile sites with fault-tolerant connections in the event of, well, nuclear war. Or the Internet began when a T3 backbone was built to host all the data. Or the Internet began with the telegraph, when the first data was sent over electric current. Or maybe the Internet began when the Chinese used fires to send messages across the Great Wall of China. Or maybe the Internet began when drums sent messages over long distances in ancient Africa, like early forms of packets flowing over Wi-Fi-esque sound waves.  We need to make complex stories simpler in order to teach them, so if the first node of the ARPAnet in 1969 is where this journey should end, feel free to stop here. To dig in a little deeper, though, that ARPAnet was just one of many networks that would merge into an interconnected network of networks. We had dialup providers like CompuServe, America Online, and even The WELL. We had regional timesharing networks like the DTSS out of Dartmouth College and PLATO out of the University of Illinois at Urbana-Champaign. We had corporate time sharing networks and systems. Each competed or coexisted or took time from others or pushed more people to others through their evolutions. Many used their own custom protocols for connectivity. But most were walled gardens, unable to communicate with the others.  So if the story is more complicated than "the ARPAnet was the ancestor to the Internet," why is that the story we hear? Let's start that journey with a memo that we did an episode on, called "Memorandum For Members and Affiliates of the Intergalactic Computer Network," sent by JCR Licklider in 1963, which can be considered the allspark that lit the bonfire called the ARPANet. Which isn't exactly the Internet, but isn't not. In that memo, Lick proposed a network of computers available to the research scientists of the early 60s - scientists at computing centers that would evolve into supercomputing centers - and then, eventually, a network open to the world, even our phones, televisions, and watches. It took a few years, but eventually ARPA brought in Larry Roberts, and by late 1968 ARPA had awarded the contract from its RFQ to build the network to a company called Bolt Beranek and Newman (BBN), who would build the Interface Message Processors, or IMPs. The IMPs were computers that connected a number of sites and routed traffic. The first IMP, which might be thought of more like a network interface card today, went online at UCLA in 1969, with additional sites coming on frequently over the next few years. That system would become ARPANET, and that first node was at the University of California, Los Angeles (UCLA for short). It grew as leased lines and IMPs became more available. As the network grew, the early computer scientists realized that each site had different computers running various and random stacks of applications and different operating systems. So we needed to standardize certain aspects of connectivity between different computers.  
Given that UCLA was the first site to come online, Steve Crocker there began organizing notes about protocols and how systems connected with one another in what they called RFCs, or Requests for Comments. That series of notes was then managed by a team that included Elizabeth (Jake) Feinler from Stanford once Doug Engelbart's project on the "Augmentation of Human Intellect" at Stanford Research Institute (SRI) became the second node to go online. SRI developed a Network Information Center, where Feinler maintained a list of host names (which evolved into the hosts file) and a list of address mappings that would later evolve into the functions of InterNIC, which would be turned over to the US Department of Commerce when the number of devices connected to the Internet exploded. Feinler and Jon Postel from UCLA would maintain those, in Postel's case until his death in 1998, and those RFCs came to include everything from opening terminal connections into machines to file sharing to addressing - and now just about any place where networking needs a standard.  The development of many of those early protocols that made computers useful over a network was also being funded by ARPA. They funded a number of projects to build tools that enabled the sharing of data, like file transfer, and some advancements came from people just doing things to make the network more useful - so by 1971 we also had email. But all those protocols needed to flow over a common form of connectivity that was scalable. Leonard Kleinrock, Paul Baran, and Donald Davies were independently investigating packet switching, and Roberts brought Kleinrock into the project since he was at UCLA. Bob Kahn entered the picture in 1972. He would team up with Vint Cerf from Stanford, who came up with encapsulation, and together they would define the protocol that underlies the Internet, TCP/IP. By 1974 Cerf and Kahn had written RFC 675, where they coined the term internet as shorthand for internetwork. The number of RFCs was exploding, as was the number of nodes. The University of California, Santa Barbara came online, then the University of Utah, connecting Ivan Sutherland's work. The network went national when BBN connected to it in 1970. Soon there were 13 IMPs, then 18 by 1971, 29 in 72, and 40 in 73. Once the need arose, Kleinrock would go on to work with Farouk Kamoun to develop hierarchical routing theories in the late 70s. ARPA itself had been renamed DARPA back in 1972. The network grew to 213 hosts in 1981; by 1982 TCP/IP became the standard for the US DOD, and in 1983 ARPANET moved fully over to TCP/IP. And so TCP/IP, or Transmission Control Protocol/Internet Protocol, became the most dominant networking protocol on the planet. It was written to help improve performance on the ARPAnet with the ingenious idea of encapsulating traffic. But in the 80s it was still just for researchers - that is, until NSFNet was launched by the National Science Foundation in 1986.  And it was international, with University College London connecting in 1973, which would go on to inspire a British research network called JANET that built its own set of protocols, the Coloured Book protocols. The Norwegian Seismic Array also connected over satellite in 1973. So networks were forming all over the place, often just time sharing networks where people dialed into a single computer. Another networking project going on at the time that was also getting funding from ARPA as well as the Air Force was PLATO. 
PLATO came out of the University of Illinois, was meant for teaching, and began on a mainframe in 1960. By the time ARPAnet was growing, PLATO was on version IV and running on a CDC Cyber. The time sharing system hosted a number of courses, as they referred to programs. These included actual courseware, games, content with audio and video, message boards, instant messaging, custom touch screen plasma displays, and the ability to dial into the system over phone lines, making the system another early network. In fact, there were multiple CDC Cybers that could communicate with one another. And many on ARPAnet also used PLATO, cross-pollinating the defense-backed networking world with a broader set of academic institutions.  The defense backing couldn't last forever. The Mansfield Amendment in 1973 barred defense agencies from funding general research without a direct military application. This meant that ARPA funding started to dry up and the scientists working on those projects needed a new place to fund their playtime. Bob Taylor split to go work at Xerox, where he was able to pick the best of the scientists he'd helped fund at ARPA. He helped bring in people from Stanford Research Institute, where they had been working on the oN-Line System, or NLS, and people like Bob Metcalfe, who brought us Ethernet and better collision detection. Metcalfe would go on to found 3Com, a great switch and network interface card company, during the rise of the Internet. But there were plenty of people who could see the productivity gains from ARPAnet and didn't want it to disappear. And the National Science Foundation (NSF) was flush with cash. And the ARPA crew was increasingly aware of non-defense-oriented use of the system. So the NSF started up a little project called CSNET in 1981 so researchers at universities without ARPAnet access could connect, and then NSFNET so the growing number of supercomputers could be shared between all the research universities. It was free for universities that could get connected, and from 1985 to 1993 NSFNET surged from 2,000 users to 2,000,000. Paul Mockapetris made the Internet easier to use than the academic-only network it had been by developing the Domain Name System, or DNS, in 1983. That's how we can call up remote computers by names rather than IP addresses. And of course DNS was yet another of the protocols on Postel's list of protocol standards at UCLA - work which, by 1986, after the selection of TCP/IP for NSFnet, was taken up by the standardization body known as the IETF, or Internet Engineering Task Force for short. Maintaining a set of protocols that all vendors needed to work with was one of the best growth hacks ever. No single vendor could have kept up with the demand of 1,000x growth in such a small number of years. NSFNet started with six nodes in 1985, connected by LSI-11 Fuzzball routers, and quickly outgrew that backbone. They put it out to bid and Merit Network won, in a partnership with MCI, the State of Michigan, and IBM. Merit had begun before the first ARPAnet connections went online as a collaborative effort by Michigan State University, Wayne State University, and the University of Michigan. They'd been connecting their own machines since 1971 and had implemented TCP/IP and bridged to ARPANET. The money was getting bigger: they got $39 million from NSF to build what would emerge as the commercial Internet.  They launched in 1987 with 13 sites over 14 lines. By 1988 they'd gone nationwide, moving from a 56k backbone to a T1 and then 14 T1s. But the growth was too fast for even that. They re-engineered, and by 1990 planned to add T3 lines running in parallel with the T1s for a time. 
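Since DNS came up a moment ago, here is a minimal sketch of what that name-to-address lookup looks like from a program today, using Python's standard library. The hostname below is just a placeholder for illustration; any resolvable name would do.

import socket

# Ask the system resolver (and ultimately DNS) to turn a name into an IP address.
# "example.com" is a placeholder hostname used purely for illustration.
hostname = "example.com"
address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {address}")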
By 1991 there were 16 backbones, with traffic and users growing by an astounding 20% per month.  Vint Cerf ended up at MCI, where he helped lobby for the privatization of the internet and helped found the Internet Society in 1992. The lobbying worked and led to the Scientific and Advanced-Technology Act in 1992. Before that, use of NSFNET was supposed to be for research and education; now it could expand beyond those uses. This allowed NSF to bring on even more nodes. And so by 1993 it was clear that this was growing beyond what a governmental institution whose charge was science could justify as "research" any longer.  By 1994, Vint Cerf was designing the architecture and building the teams that would build the commercial internet backbone at MCI. And so NSFNET began the process of unloading the backbone and helped the world develop the commercial Internet by sprinkling a little money and know-how throughout the telecommunications industry, which was about to explode. NSFNET went offline in 1995, but by then there were networks in England, South Korea, Japan, and Africa, and CERN was connected to NSFNET over TCP/IP. And Cisco was selling routers that would fuel an explosion internationally. There was a war of standards, and yet over time we settled on TCP/IP as THE standard.  And those were just some of the nets. The Internet is really not just NSFNET or ARPANET but a combination of a lot of nets. At the time there were a lot of time sharing computers that people could dial into, and following the release of the Altair, there was a rapidly growing personal computer market, with modems becoming more and more approachable towards the end of the 1970s. You see, we talked about these larger networks but not the hardware.  The first modulator-demodulator, or modem, was the Bell 101 dataset, which had been invented all the way back in 1958, loosely based on a previous model developed to manage SAGE computers. But transfer rates had stalled at around 300 baud for almost 20 years and not much had changed. That is, until Hayes Microcomputer Products released a modem designed to run on the Altair 8800 S-100 bus in 1978. Personal computers could talk to one another.  And one of those Altair owners, Ward Christensen, met Randy Suess at the Chicago Area Computer Hobbyists' Exchange, and the two of them had this weird idea: host a bulletin board on one of their computers. People could dial into it and discuss their Altair computers when it snowed too much for their club to meet in person. They started writing a little code, and before you know it we had a tool they called Computerized Bulletin Board System software, or CBBS. The software and, more importantly, the idea of a BBS spread like wildfire right along with the Atari, TRS-80, Commodore, and Apple computers that were igniting the personal computing revolution. The number of nodes grew, and as people started playing games, the speed of those modems jumped up, with the V.32 standard hitting 9600 bits per second in 1984 and over 25k arriving in the early 90s. By the early 1980s we got FidoNet, which was a network of Bulletin Board Systems, and by the early 90s we had 25,000 BBSs. And other nets had been on the rise, and these were commercial ventures. The largest of those dial-up providers was America Online, or AOL. AOL began in 1985 and, like most of the other dial-up providers of the day, was there to connect people to computers it hosted, like a timesharing system, and give access to fun things. 
Games, news, stocks, movie reviews, chatting with your friends, etc. There were also CompuServe, The WELL, PSINet, Netcom, Usenet, AlterNet, and many others. Some started to communicate with one another with the rise of the Metropolitan Area Exchanges, which got an NSF grant to establish switched Ethernet exchanges, and the Commercial Internet Exchange in 1991, established by PSINet, UUNET, and CERFnet out of California.  Those slowly moved over to the Internet, and even AOL got connected to the Internet in 1989, and thus the dial-up providers went from effectively being timesharing systems to Internet Service Providers as more and more people expanded their horizons away from the walled garden of the time sharing world and towards the Internet. The number of BBS systems started to wind down. All these IP addresses couldn't be managed easily, and so address management evolved from contracts with research universities, to DARPA, and then to IANA as a part of ICANN, and eventually to the development of Regional Internet Registries: AFRINIC to serve Africa; ARIN to serve Antarctica, Canada, the Caribbean, and the US; APNIC to serve South, East, and Southeast Asia as well as Oceania; LACNIC to serve Latin America; and RIPE NCC to serve Europe, Central Asia, and West Asia. By the 90s the Cold War was winding down (temporarily at least), so they even added Russia to RIPE NCC. And so, using tools like Winsock, any old person could get on the Internet by dialing up. Dial-up modems transitioned to DSL and cable modems. We got the emergence of fiber, with regional centers and even national FiOS connections. And because of all the hard work of all of these people and the money dumped into it by the various governments and research agencies, life is pretty darn good.  When we think of the Internet today we think of this interconnected web of endpoints and content that is all available. Much of that was made possible by the development of the World Wide Web by Tim Berners-Lee at CERN in 1991, and Mosaic came out of the National Center for Supercomputing Applications, or NCSA, at the University of Illinois, quickly becoming the browser everyone wanted to use until Marc Andreessen left to form Netscape. Netscape's IPO is probably one of the most pivotal moments, when investors from around the world realized that all of this research and tech was built on standards and that, while there were some patents, the standards were freely usable by anyone.  Those standards led to an explosion of companies like Yahoo!, from a couple of Stanford grad students, and Amazon, started by a young hedge fund vice president named Jeff Bezos, who noticed all the money pouring into these companies and went off to do his own thing in 1994. The rush of companies that arose to create and commercialize content and ideas and bring every industry online was ferocious.  And there were the researchers still writing the standards, and even commercial interests helping with that. And there were open source contributors who helped make some of those standards easier for regular old humans to implement. And tools for those who build tools. And from there the Internet became what we think of today: quicker and quicker connections, more and more productivity gains, a better quality of life, better telemetry into all aspects of our lives, and, with the miniaturization of devices to support wearables, a network that even extends to our bodies. Yet it still sits on the same fundamental building blocks as before. 
The IANA functions to manage IP addressing have moved to the private sector, as have many an onramp to the Internet - especially as internet access has become more ubiquitous and we enter the era of 5G connectivity.  And it continues to evolve as we pivot due to the new needs and threats a globally connected world represents: IPv6, various secure DNS options, countermeasures for spam and phishing, and dealing with the equality gaps surfaced by our new online world. We have disinformation, so sometimes we might wonder what's real and what isn't. After all, any old person can create a web site that looks legit and put whatever they want on it. Who's to say what reality is other than what we want it to be? This was pretty much what Morpheus was offering with his choice of pills in The Matrix. But underneath it all, there's history. And it's a history as complicated as unraveling the meaning of an increasingly digital world. And it is wonderful and frightening and lovely and dangerous and true and false and destroying the world and saving the world all at the same time.  This episode is pretty simplistic, and many of the aspects we cover have entire episodes of the podcast dedicated to them - from the history of Amazon to Bob Taylor to AOL to the IETF to DNS and even Network Time Protocol. It's a story that necessarily leaves people out; otherwise scope creep would go all the way back to include Volta and the constant electrical current humanity received with the battery. But hey, we also have an episode on that! And many an advance has plenty of books and scholarly works dedicated to it - all the way back to the first known computer (in the form of clockwork), the Antikythera Device out of Ancient Greece. Heck, even Louis Gerstner deserves a mention for selling IBM's stake in all this to focus on things that kept the company going, not moonshots.  But I'd like to dedicate this episode to everyone not mentioned in trying to tell a story of emergent networks. Just because the networks were growing fast and our modern infrastructure was becoming more and more deterministic doesn't mean there aren't so, so, so many people who are a part of this story - whether they were writing a text editor, helping fund the work, pushing paper, writing specs, selling network services, or getting zapped while trying to figure out how to move current. Each with their own story to be told. As we round the corner into the third season of the podcast we'll start having more guests. If you have a story and would like to join us, use the email button on thehistoryofcomputing.net to drop us a line. We'd love to chat!
7/12/2021 · 29 minutes, 45 seconds
Episode Artwork

The History of Plastics in Computing

Nearly everything is fine in moderation. Plastics exploded as an industry in the post-World War II boom of the 50s and on - but the story goes back far further. Plastics belong to a category of materials called polymers: materials composed of long chains of molecules. Polymers are easy to find in nature - cellulose, which makes up the cell walls of plants, comes in many forms. But while the word plastics comes from easily pliable materials, we don't usually think of plant-based products as plastics. Instead, we think of the synthetic polymers. But documented uses go back thousands of years, especially with early uses of natural rubbers, milk proteins, gums, and shellacs. But as we rounded the corner into the mid-1800s with the rise of chemistry, things picked up steam. That's when Charles Goodyear, looking to make rubber more durable, discovered vulcanization as a means to treat it. Vulcanization is when rubber is heated and mixed with other chemicals like sulphur. Then in 1869 John Wesley Hyatt looked for an alternative to natural ivory for things like billiard balls. He found that treated cotton fibers could be combined with camphor, which came from the waxy wood of camphor laurels. The resulting substance could be shaped, dried, and passed off as most anything nature produced. When Hyatt was innovating, most camphor was extracted from trees, but today most camphor is synthetically produced from petroleum-based products, further freeing humans from needing natural materials to produce goods. Not only could we skip killing elephants, we could avoid chopping down forests to meet our needs for goods. Leo Baekeland gave us Bakelite in 1907. By then we were using other materials and the hunt was on for all kinds of new ones. Shellac had been used as a moisture sealant for centuries; it came from female lac bugs in trees around India but could also be used to insulate electrical components. Baekeland created a phenol and formaldehyde resin he called Novolak and - much as with the advent of steel - realized that by changing the temperature and the amount of pressure applied he could make it harder and more moldable. Thus Bakelite became the first fully synthetic polymer. Hermann Staudinger started doing more of the academic research to explain why these reactions were happening. In 1920, he wrote a paper that looked at rubber, starch, and other polymers, explaining how their long chains of molecular units were linked by covalent bonds - thus their high molecular weights. He would go on to collaborate with his wife Magda Voita, who was a botanist, and see his polymer theories proven. And so plastics went from experimentation to science.  Scientists and experimenters alike continued to investigate uses, and by 1925 there was even a magazine called Plastics. They could add filler to Bakelite and create colored plastics for all kinds of uses, and started molding jewelry, gears, and other trinkets. They could heat it to 300 degrees and then inject it into molds. And so plastic manufacturing was born. As with many of the things we interact with in our modern world, use grew through the decades and there were other industries that started to merge, evolve, and diverge.  Éleuthère Irénée du Pont had worked with gunpowder in France and his family immigrated to the United States after the French Revolution. He'd worked with chemist Antoine Lavoisier while a student and started producing gunpowder in the early 1800s. 
That company, which evolved into the modern DuPont, always excelled in various materials sciences and through the 1920s also focused on a number of polymers. One of their employees, Wallace Carothers, invented neoprene and so gave us our first super polymer in 1930. He would go on to invent nylon as a synthetic form of silk in 1935. DuPont also brought us a range of insecticides and, in 1938, Teflon. Acrylic acid went back to the mid-1800s, but as people experimented with combining chemicals, around the same time we saw the British chemists John Crawford and Rowland Hill and, independently, the German Otto Röhm develop products based on polymethyl methacrylate. Here, they were creating clear, hard plastic to be used like glass. The Brits called theirs Perspex and the Germans called theirs Plexiglas when they went to market, with our friends back at DuPont creating yet another called Lucite.  The period between World War I and World War II saw advancements in nearly every science - from mechanical computing to early electrical switching and, of course, plastics. The Great Depression saw a slow-down in the advancements, but World War II and some of the basic research happening around the world caused an explosion as governments dumped money into build-ups. That's when DuPont cranked out parachutes and tires and even got involved in building the Hanford plutonium plant as a part of the Manhattan Project (and later the Savannah River plant). This took them away from things like nylon, which led to riots. We were clearly in the era of synthetics used in clothing.  Leading up to the war and beyond, every supply chain of natural goods got constrained. And so synthetic replacements for these were being heavily researched and new uses were being discovered all over the place. Add in assembly lines and we were pumping out things to bring joy or improve lives at a constant clip. BASF had been making dyes since the 1860s, but chemicals are chemicals, and they developed polystyrene in the 1930s and continued to grow, benefiting from both licensing and developing other materials like Styropor insulating foam.    Dow Chemical had been founded in the 1800s by Herbert Henry Dow and became an important part of the supply chain for the growing synthetics businesses, working with Corning to produce silicones and producing styrene and magnesium for light aircraft parts. They too would help in nuclear developments, managing the Rocky Flats plutonium trigger plant, and went on to make napalm, Agent Orange, breast implants, plastic bottles, and anything else we could mix chemicals into. Expanded polystyrene led to plastics in cups, packaging, and anything else. By the 60s we were fully in a synthetic world. A great quote from 1967's "The Graduate" was "I want to say one word to you. Just one word. Are you listening? Plastics." The future was here. And much of that future involved injection molding machines, now more and more common. Many a mainframe was encased in metal, but with hard plastics we could build the faceplates out of plastic. The IBM mainframes had lots of blinking lights recessed into holes in plastic, with metal switches sticking out. Turns out people get shocked less when the whole thing isn't metal.  The minicomputers were smaller, but by the time of the PDP-11 there were plastic toggles and a plastic front on the chassis. The Altair 8800 ended up looking a lot like that, but brought that technology to the hobbyist. By the time the personal computer started to go mainstream, the full case was made with injection molding. 
The things that went inside computers were increasingly plastic as well. Going back to the early days of mechanical computing, gears were made out of metal. But tubes were often wired into circuits screwed to wooden boards. Albert Hanson had worked on foil conductors laminated to insulating boards going back to 1903, Charles Ducas patented electroplating circuit patterns in 1927, and the Austrian Paul Eisler invented printed circuits for radio sets in the mid-1930s. John Sargrove then figured out he could spray metal onto plastic boards made of Bakelite in the late 1930s; uses expanded to proximity fuzes in World War II, and then Motorola helped bring printed circuits into broader consumer electronics in the early 1950s. Printed circuit boards then moved to screen printing metallic paint onto various surfaces, and Harry Rubinstein patented printing components, which helped pave the way for integrated circuits. Board lamination and etching were added to the process, and the conductive traces might be etched copper, plated substrates, or even silver inks like those used in RFID tags. We've learned over time to make things easier, and with more precise machinery we were able to build smaller and smaller boards, chips, and eventually 3D printed electronics - even the Circuit Scribe to draw circuits. Doug Engelbart's first mouse was wood, but by the time Steve Jobs insisted they be mass producible, they'd already gone plastic for Engelbart and then the Alto. Computer keyboards had evolved out of the Flexowriter and so became plastic as well. Even the springs that caused keys to bounce back up were eventually replaced with plastic and rubberized materials in different configurations.  Plastics are great for insulating electronics: they are poor conductors of heat, they're light, they're easy to mold, they're hardy, synthetics require less than 5% of the oil we use, and they're recyclable. Silicone, another polymer, is a term coined by the English chemist F.S. Kipping in 1901. His academic work while at University College, Nottingham would kickstart the synthetic rubber and silicone lubricant industries. But that's not silicon - that's an element, and a tetravalent metalloid at that. Silicon was discovered in 1787 by Antoine Lavoisier. Yup, the same guy that taught du Pont. While William Shockley started off with germanium and silicon when he was inventing the transistor, it was Jack Kilby and Robert Noyce who realized how well silicon worked as a semiconductor and put it to use in what we now think of as the microchip. But again, that's not a plastic… Plastic of course has its drawbacks, especially since we don't consume plastics in moderation. It takes 400 to a thousand years for many plastics to decompose. The rampant use in every aspect of our lives has led to animals dying after eating plastic, or getting caught in islands of it, as plastic is all over the oceans and other waterways around the world. That's 5 and a quarter trillion pieces of plastic in the ocean, weighing a combined 270,000 tons, with another 8 million pieces flowing in each and every day. In short, the overuse of plastics is hurting our environment. Or at least, our inability to control our rampant consumerism is leading to their overuse. They do melt at low temperatures, which can be a good or a bad thing. When they do, they can release hazardous fumes like PCBs and dioxins. And many of the chemical compounds they rely on come from fossil fuels, so plastics are derived from non-renewable resources. 
But they're affordable and represent a trillion dollar industry. And we can all do better at recycling - which of course requires energy, and those bonds break down over time, so we can't recycle forever. Oh, and the byproducts from the creation of these products are downright toxic. We could argue that plastic is one of the most important discoveries in the history of humanity. That guy from The Graduate certainly would. We could argue it's one of the worst. But we also just have to realize that our modern lives, and especially all those devices we carry around, wouldn't be possible without plastics and other synthetic polymers. There's a future where, instead of running out to the store for certain items, we just 3D print them. Maybe we even make filament from printed materials we no longer need. The move to recyclable materials for packaging helps reduce the negative impacts of plastics. But so does just consuming less. Except devices. We obviously need the latest and greatest of each of those all the time!  Here's the thing: half of plastics are single-use. Much of that is packaging, like containers and wrappers. But can you imagine life without the 380 million tons of plastics the world produces a year? Just look around right now. I couldn't tell you how many parts of this microphone, this computer, and all the cables and adapters are made of it - or how many couldn't be made of anything else. There was a world without plastics for thousands of years of human civilization. We'll look at one of those single-use-plastic-heavy industries, fast food, in an episode soon. But it's not the plastics that are such a problem. It's the wasteful, rampant consumerism. When I take out my recycling I can't help but think that what goes in the recycling versus compost versus garbage is as much a symbol of who I want to be as what I actually end up eating and relying on to live. And yet, I remain hopeful that these discoveries can actually end up bringing us back into harmony with the world around us without our turning into Luddites and walking back all of these amazing developments like we see in science fiction's dystopian futures.
7/5/2021 · 19 minutes, 21 seconds
Episode Artwork

The Laws And Court Cases That Shaped The Software Industry

The largest global power during the rise of intellectual property was England, so the world adopted her philosophies. The US had the same impact on software law. Most case law that shaped the software industry is based on copyright law. Our first real software laws appeared in the 1970s, and we now have 50 years of jurisprudence to help guide us. This episode looks at the laws, Supreme Court cases, and some circuit appeals cases that shaped the software industry. -------- In our previous episode we went through a brief review of how the modern intellectual property laws came to be. Patent laws flowed from inventors in Venice in the 1400s; royals granted monopoly privileges to inventors throughout the rest of Europe over the next couple of centuries; that power transferred to panels and academies during and after the Age of Revolutions; and patents slowly matured for each industry as technology progressed.  Copyright laws formed similarly, although they were a little behind patent laws due to the fact that they weren't really necessary until we got the printing press. But when it came to data on a device, we had a case in 1908 we covered in the previous episode that led Congress to enact the 1909 Copyright Act.  Mechanical music boxes evolved into mechanical forms of data storage, and computing evolved from mechanical to digital. Following World War II there was an explosion in new technologies, with those in computing funded heavily by the US government. Or at least, until we got ourselves tangled up in a very unpopular asymmetrical war in Vietnam. The Mansfield Amendment of 1969 was a small provision in the 1970 Military Authorization Act that barred the US military from funding research that didn't have a direct relationship to a specific military function. Money could still flow from ARPA into a program like the ARPAnet because we wanted to keep those missiles flying in case of nuclear war. But over time the impact was that a lot of those dollars the military had pumped into computing to help develop the underlying basic sciences behind things like radar and digital computing were about to dry up. This was a turning point: it was time to take the computing industry commercial. And that means lawyers. And so we got the first laws pertaining to software shortly after the software industry emerged from the ever more custom requirements for these mainframes and then minicomputers, and from the growing collection of computer programmers. The Copyright Act of 1976 was the first major overhaul to the copyright laws since the 1909 Copyright Act. Since then, the US had become a true world power, and much as the rest of the world had followed the British laws from the Statute of Anne in 1709 as a template for copyright protections, the world now looked on as the US developed its laws. Many nations had joined the Berne Convention for international copyright protections, but the publishing industry had exploded. We had magazines, so many newspapers, so many book publishers. And we had this whole weird new thing to deal with: software.  Congress didn't explicitly protect software in the Copyright Act of 1976, but it did add cards and tape as mediums, and Congress knew this was an exploding new thing that would work itself out in the courts if they didn't step in. And of course executives from the new software industry were asking their representatives to get in front of things rather than have the unpredictable courts adjudicate a weird copyright mess in places where technology meets copy protection. 
So in Section 117, Congress added a placeholder in the act and empaneled the National Commission on New Technological Uses of Copyrighted Works, or CONTU, to provide a report about software. CONTU held hearings. They went beyond just software, as there was another newish technology changing the world: photocopying. They presented their findings in 1978 and recommended we define a computer program as a set of statements or instructions to be used directly or indirectly in a computer in order to bring about a certain result. They also recommended that copies be allowed if required to use the program and that those be destroyed when the user no longer has rights to the software. This is important because this is an era where we could write software into memory or start installing compiled code onto a computer and then hand the media used to install it off to someone else.  At the time the hobbyist industry was just about to evolve into the PC industry, but hard disks were years out for most of those machines. It was all about floppies. Up-market, though, there were all kinds of storage, and the writing was on the wall about what was coming: install software onto a computer, copy and sell the disk, move on. People would of course do that, but not legally.  Companies could still sign away their copyright protections as part of a sales agreement, but the right to copy was under the creator's control. Things like End User License Agreements were still far away, though. Imagine how ludicrous the idea would have seemed in the 1970s that a piece of software going bad could put a company out of business. That would come later, as we needed to limit liability and not just restrict the right to copy to those who, well, had the right to do so. Further, we hadn't yet standardized on computer languages. And yet companies were building complicated logic to automate business and needed to be able to adapt works for other computers, and so Congress looked to provide that right, at the direction of CONTU, as well - if only to the company doing the customizations, without allowing the adapted software to then be resold. These were all hashed out and put into law in 1980. And that's an important moment, as suddenly the party who owned a copy was the rightful owner of a piece of software. Many of the provisions read as though we were dealing with booksellers selling copies of a book rather than the intricate details of the technology, but technology changes so quickly and those who make laws aren't exactly technologists, so that's to be expected.  Source code versus compiled code also got tested. In 1982 Williams Electronics v Artic International explored a video game that shipped in a ROM (which is how games were distributed before disks and cassette tapes). Here, the Third Circuit weighed in on whether, since the ROM was built into the machine, it could be copied because it was utilitarian and therefore not covered under copyright. The source code was protected, but what about what amounts to compiled code sitting on the ROM? They of course found that it was indeed protected.  They weighed in again on Apple v Franklin in 1983. Here, Franklin Computer was cloning Apple computers and claimed it couldn't clone the computer without copying what was in the ROMs, which at the time held a rudimentary version of what we think of as an operating system today.  Franklin claimed the OS was in fact a process or method of operation, and Apple claimed it was novel. 
At the time, much of that OS, including Applesoft BASIC, shipped as object code in ROM rather than as source code, but object code was still a program and thus still protected. One and two years later, respectively, we got the original Mac OS and Windows 1.0. 1986 saw Whelan Associates v Jaslow. Here, Elaine Whelan created a management system for a dental lab on the IBM Series/1, in EDL. That was a minicomputer, and when the personal computer came along she sued Jaslow because he took a BASIC version to market for the PC. He argued it was a different language and the set of commands was therefore different. But the programs looked structurally similar. She won: while only some literal elements were the same, "the copyrights of computer programs can be infringed even absent copying of the literal elements of the program." It's simple to identify literal copying of software code when it's done verbatim, but difficult to identify non-literal copyright infringement.  But this was all professional software. What about those silly video games all the kids wanted? Well, Atari applied for a copyright for one of their games, Breakout. Here, the Register of Copyrights, Ralph Oman, chose not to register the copyright. And so Atari sued, winning on appeal. There were certainly other dental management packages on the market at the time of Whelan, but the courts held that "copyrights do not protect ideas – only expressions of ideas." Many found fault with the Whelan decision, and the Second Circuit heard Computer Associates v Altai in 1992. Here, the court applied a three-step test of Abstraction-Filtration-Comparison to determine how similar products were and held that Altai's rewritten code did not meet the necessary requirements for copyright infringement. There were other types of litigation surrounding the emerging digital sphere at the time as well. The Computer Fraud and Abuse Act came along in 1986 and would be amended in 89, 94, 96, and 2001. Here, a number of criminal offenses were defined - not copyright, but the act has come up to criminalize activities that might otherwise have been copyright cases. And the Copyright Act of 1976, along with the CONTU findings, was amended as a software rental market came to be (much as happened with VHS tapes), with Congress establishing provisions to cover that in 1990. Keep in mind that time sharing was just ending by then, but we could rent video games over dial-up, and of course VHS rentals were huge at the time. Here's a fun one: Atari infringed on Nintendo's copyright by falsely claiming they were a defendant in a case and applying to the Copyright Office to get a copy of the 10NES program, which they then used to infringe on that copyright. They tried to claim they couldn't infringe because they couldn't make games unless they reverse engineered the systems. Atari lost that one. But Accolade won a similar one against Sega soon thereafter, because reverse engineering so more games could play on a Sega was fair use. Sony then tried to sue Connectix in a similar case, where a PlayStation could effectively be booted in an emulator using a BIOS Connectix had reverse engineered. And again, that was reverse engineering for the sake of fair use of a PlayStation people paid for. Kinda' like jailbreaking an iPhone, right? Yup, apps that help jailbreak, like Cydia, are legal on an iPhone. But Apple moves the cheese so much in terms of what's required to make it work that it's a bigger pain to jailbreak than it's worth. Much better than suing everyone.  Laws are created and then refined in the courts. MAI Systems Corp. v. 
Peak Computer made it to the Ninth Circuit Court of Appeals in 1993. This involved Eric Francis leaving MAI and joining Peak, then loading MAI's diagnostic tools onto computers. MAI thought there should be a license per computer, yet Peak used the same disk in multiple computers. The crucial change here was that the copy loaded into memory, while ephemeral, was held to be a copy of the software and so violated the copyright. We said we'd bring up that EULA though. In 1996, the Seventh Circuit found in ProCD v Zeidenberg that the shrinkwrap license was enforceable and not preempted by copyright law, thus allowing companies to rely on either copyright law or a license when seeking damages - and giving lawyers yet another reason to answer any and all questions with "it depends." One thing was certain: the digital world was coming fast in those Clinton years. I mean, the White House would have a Gopher page, and Yahoo! would be on display at his second inauguration. So in 1998 we got the Digital Millennium Copyright Act (DMCA). Here, Congress added to Section 117 to allow for software copies if the software was required for maintenance of a computer. And yet software was still just a set of statements, like instructions in a book, that led the computer to a given result. The DMCA did have provisions addressing content providers and e-commerce providers. It also implemented two international treaties and provided anti-circumvention remedies against defeating copy-prevention systems, since by then cracking was becoming a bigger thing. There was more packed in here. We got MAI Systems v Peak Computer reversed by law, refinements to how the Copyright Office works, modernization of audio and movie rights, and provisions to facilitate distance education. And of course the DMCA protected boat hull designs because, you know, might as well cram some extra stuff into a digital copyright act.  In addition to the cases we covered earlier, we had Mazer v Stein, Dymow v Bolton, and even Computer Associates v Altai, which cemented the AFC method as the means by which most courts determine copyright protection as it extends to non-literal components such as dialogue and images. Time and time again, courts have weighed in on what fair use is because the boundaries are constantly shifting, in part due to technology, but also in part due to shifting business models.  One of those shifting business models was ripping songs and movies. RealDVD got sued by the MPAA for allowing people to rip DVDs. YouTube would later get sued by Viacom, but the courts found no punitive damages could be awarded. Still, many online portals started to scan for and filter out works they knew were copy protected, especially given the rise of machine learning to aid in the process. But those were big, major companies at the time. IO Group, Inc. sued Veoh for uploaded video content, and the judge found Veoh was protected by safe harbor.  Safe harbor mostly refers to the Online Copyright Infringement Liability Limitation Act, or OCILLA for short, which shields online portals and internet service providers from liability for copyright infringement by their users. This would be separate from Section 230, which protects those same organizations from being sued for 3rd party content uploaded on their sites. That's the law Trump wanted overturned during his final year in office, but given that the EU has Directive 2000/31/EC, Australia has the Defamation Act of 2005, Italy has the Electronic Commerce Directive 2000, and lots of other countries like England and Germany have had courts find similarly, it is now part of being an Internet company. 
Although the future of "big tech" cases (and the damage many claim is being done to democracy) may see it refined or limited. In 2016, Cisco sued Arista for allegedly copying the command line interfaces used to manage switches. Cisco lost, but had claimed more than $300 million in damages. Here, the familiar Cisco command structure allowed Arista to recruit seasoned Cisco administrators to the cause. Cisco had done the mental modeling to evolve those commands for decades, and it seemed like those commands would have been their intellectual property. But Arista hadn't copied the code.  Then in 2017, in ZeniMax vs Oculus, ZeniMax won a half-billion-dollar case against Oculus for copying their software architecture.  And we continue to struggle with what copyright means as far as code goes. Just in 2021, the Supreme Court ruled in Google v Oracle America that using application programming interfaces (APIs), including representative source code, can be transformative and fall within fair use, though it did not rule on whether such APIs are copyrightable. I'm sure the CP/M team, who once practically owned the operating system market, would have something to say about that after Microsoft swooped in and recreated much of the work they had done. But that's for another episode. And traditional media cases continue. ABS Entertainment vs CBS looked at whether digitally remastering works extended copyright. BMG vs Cox Communications challenged peer-to-peer file-sharing in safe harbor cases (not to mention the whole Napster-testifying-before-Congress thing). You certainly can't resell mp3 files the way you could drop off a few dozen CDs at Tower Records, right? Capitol Records vs ReDigi said nope. Perfect 10 v Amazon, Goldman v Breitbart, and so many more cases continued to narrow down how, and by whom, the right to copy audio, images, text, and other works could be restricted by their creators. But sometimes it's confusing. Dr. Seuss vs ComicMix asked whether merging Star Trek with "Oh, the Places You'll Go!" was transformative enough to count as fair use - the district court thought so, but the Ninth Circuit ultimately disagreed. Sometimes I find conflicting lines in opinions. Speaking of conflict… Is the government immune from copyright? Allen v Cooper, Governor of North Carolina made it to the Supreme Court, which upheld the state's blanket sovereign immunity. Now, this was a shipwreck case, but it extended to digital works, and the Supreme Court seemed to begrudgingly find for the state, looking to future legislation as a remedy rather than awarding damages. In other words, the "digital Blackbeards" of a state could pirate software at will. Guess I won't be writing any software for the state of North Carolina any time soon! But what about content created by a state? Well, the state of Georgia makes various works available behind a paywall. That paywall might be run by a third party in exchange for a cut of the proceeds. So Public.Resource goes after anything where the edict of a government isn't in the public domain - in other words, court decisions, laws, and statutes should be free to all who wish to access them. The "government edicts doctrine" won in the end, and so access to the laws of the nation continues to be free. What about algorithms? That's more patent territory - when they are actually patentable, which is rare. In Gottschalk v. Benson, a patent was denied for a new way to convert binary-coded decimal numbers to binary, while Diamond v Diehr found that an algorithm used to run a rubber molding machine was patentable. 
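For a sense of the kind of algorithm at issue in Gottschalk v. Benson, here is a minimal sketch of converting binary-coded decimal digits into a plain binary integer, written in Python for illustration. It is not the method claimed in the case, just a simple example of the task.

def bcd_to_int(bcd_digits):
    # Each BCD digit is a decimal digit 0-9 stored in its own 4-bit nibble.
    # This is an illustrative conversion, not the patented method from the case.
    value = 0
    for digit in bcd_digits:
        if not 0 <= digit <= 9:
            raise ValueError("each BCD digit must be 0 through 9")
        value = value * 10 + digit  # shift the running total one decimal place and add the digit
    return value

# The decimal digits 4 and 2 become the number 42, or 0b101010 in binary.
print(bin(bcd_to_int([4, 2])))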
And companies like Intel and Broadcom hold thousands of patents for chip microcode. What about the emergence of open source software and the laws surrounding social coding? We'll get to open source and its consequences in future episodes! One final note: most have never heard of the names in the early cases, while most have heard of the organizations listed in the later cases. Settling issues in the courts has gotten really, really expensive. And it doesn't always go the way we want. So these days, whether it's Apple v Samsung or other tech giants, the law seems to be reserved for those who can pay for it. Sure, there are the Erin Brockovich cases of the world. And Lady Justice is still blind. We can still represent ourselves, and case law and notes are free. But money can win cases by buying attorneys with deep knowledge (which doesn't come cheap). And these cases drag on for years, and given that the startup assembly line often halts with pending legal actions, not many can withstand the latency incurred. This isn't a "big tech is evil" comment as much as "I see it and don't know a better rubric but it's still a thing" kinda' comment. Here's something better, which we'd love to have a listener take away from this episode: technology is always changing. Laws usually lag behind technological change since, like us, they're reactive to innovation. When those changes come, there is opportunity. Not only has the technological advancement gotten substantial enough to warrant lawmaker time, but the changes often create new gaps in markets that new entrants can leverage. Either market leaders adapt quickly or they see those upstarts swoop in, carrying no technical debt and able to pivot faster than those who previously might have enjoyed a first-mover advantage. What laws are out there being hashed out, just waiting to disrupt some part of the software market today?
6/13/2021 · 28 minutes, 56 seconds
Episode Artwork

Origins of the Modern Patent And Copyright Systems

Once upon a time, the right to copy text wasn't really necessary. If one had a book, one could copy the contents of the book by hiring scribes to labor away at the process, and books were expensive. Then came the printing press. Now the printer of a work would put a book out and another printer could set their press up to reproduce the same text. More people learned to read, and information flowed from the presses at the fastest pace in history.  The printing press spread from Gutenberg's workshop in the 1440s throughout Germany and then to the rest of Europe, appearing in England when William Caxton built the first press there in 1476. It was a time of great change, causing England to retreat into protectionism, and Henry VIII tried to restrict what could be printed in the 1500s. But Parliament needed to legislate further.  England was first to establish copyright when Parliament passed the Licensing of the Press Act in 1662, which regulated what could be printed. This was more to prevent the printing of scandalous materials, and it basically gave The Stationers' Company a monopoly to register, print, copy, and publish books. They could enter another printer's shop and destroy their presses. That went on for a few decades until the act was allowed to lapse in 1694, but it began the 350-year journey of refining what copyright and censorship mean to a modern society.  The next big step came in England when the Statute of Anne was passed in 1710. It was named for the last reigning Queen of the House of Stuart. While previously a publisher could appeal to have a work censored by others because the publisher had created it, this statute took a page out of the patent laws and granted a right of protection against the copying of a work for 14 years. Reading through the law and further amendments, it is clear that lawmakers were thinking far more deeply about the balance between protecting the license holder of a work and getting more books to more people. They'd clearly become less protectionist and more concerned with a literate society.  There are examples throughout history of granting exclusive rights to an invention, from the Greeks to the Romans to Papal Bulls. These granted land titles, various rights, or a status to people. Edward the Confessor started the process of establishing the Close Rolls in England in the 1050s, where a central copy of all those grants was kept. But they could also be used to grant a monopoly, with the first that's been found being granted by Edward III to John Kempe of Flanders as a means of helping the cloth industry in England to flourish.  Still, this wasn't exactly an exclusive right but instead a right to emigrate. And the letters were personal, and so letters patent evolved into royal grants, which Queen Elizabeth was providing by the late 1500s. That emerged out of the need for patent laws proven by the Venetians in the late 1400s, when they started granting exclusive rights by law to inventions for 10 years. King Henry II established a royal patent system in France, and over time the French Academy of Sciences was put in charge of patent review. English law evolved, and perpetual patents granted by monarchs were stifling progress. Monarchs might grant patents to raise money, allowing a specific industry to turn into a monopoly that raised funds for the royal family. James I was forced to revoke the previous patents, but a system was needed. 
And so the patent system was more formalized and those for inventions got limited to 14 years when the Statute of Monopolies was passed in England in 1624. The evolution over the next few decades is when we started seeing drawings added to patent requests and sometimes even required. We saw forks in industries and so the addition of medical patents, and an explosion in various types of patents requested. They weren’t just in England. The mid-1600s saw the British Colonies issuing their own patents. Patent law was evolving outside of England as well. The French system was becoming larger with more discoveries. By 1729 there were digests of patents being printed in Paris and we still keep open listings of them so they’re easily proven in court. Given the maturation of the Age of Enlightenment, that clashed with the financial protectionism of patent laws, and intellectual property as a concept emerged - borrowing from the patent institutions and bringing us right back to the Statute of Anne, which established the modern copyright system. That and the Statute of Monopolies are where the British Empire established the modern copyright and patent systems respectively, which we use globally today. Apparently they were worth keeping throughout the Age of Revolution, mostly probably because they’d long been removed from monarchal control and handed to various public institutions. The American Revolution came and went. The French Revolution came and went. The Latin American wars of independence, revolutions throughout the 1820s, the end of Feudalism, Napoleon. But the wars settled down and a world order of sorts came during the late 1800s. One aspect of that world order was the Berne Convention, which was signed in 1886. This established the mutual recognition of copyrights among sovereign nations that signed onto the treaty, rather than having various nations enter into pacts between one another. Now, the right to copy works was automatically in force at creation, so authors no longer had to register their works in Berne Convention countries. Following the Age of Revolutions, there was also an explosion of inventions around the world. Some ended up putting copyrighted materials onto reproducible forms. Early data storage. Previously we could copyright sheet music, but the introduction of the player piano led to the need to determine the copyrightability of piano rolls in White-Smith Music v. Apollo in 1908. Here we saw the US Supreme Court find that these were not copies as interpreted in the US Copyright Act because only a machine could read them, and they basically told Congress to change the law. So Congress did. The Copyright Act of 1909 then specified that even if only a machine can use information that’s protected by copyright, the copyright protection remains. And so things sat for a hot minute as we got first mechanical computing, which was patentable under the old rules, and then electronic computing, which was also patentable. Jacquard patented his punch cards in 1801. But by the time Babbage and Lovelace used them in Babbage’s engines that patent had expired. And the first digital computer to get a patent was the Eckert-Mauchly ENIAC, whose patent was filed in 1947, granted in 1964, and, because there was prior unpatented work, overturned in 1973. Dynamic RAM was patented in 1968. But these were for physical inventions. Software took a little longer to become a legitimate legal quandary. 
The time it took to reproduce punch cards and the lack of really mass produced software didn’t become an issue until after the advent of transistorized computers with Whirlwind, the DEC PDP, and the IBM S/360. Inventions didn’t need a lot of protections when they were complicated and it took years to build one. I doubt the inventor of the Antikythera Device in Ancient Greece thought to protect their intellectual property because they’d likely have been delighted if anyone else in the world had thought to, or been capable of, creating what they created. Over time, the capabilities of others rise and our intellectual property becomes more valuable because progress moves faster with each generation. Those Venetians saw how technology and automation were changing the world and allowed the protection of inventions to provide a financial incentive to invent. Licensing the commercialization of inventions then allows us to begin the slow process of putting ideas on a commercialization assembly line. Books didn’t need copyright until they could be mass produced and become commercially viable. A writer writes, or creates intellectual property, and a publisher prints and distributes. Thus we put the commercialization of literature and thoughts and ideas on an assembly line. And we began doing so far before the Industrial Revolution. Once there were more inventions and some became capable of mass producing the registered intellectual property of others, we saw a clash between copyrights and patents. And so we got the Copyright Act of 1909. But with digital computers we suddenly had software emerging as an entire industry. IBM had customized software for customers for decades, but computer languages like FORTRAN and mass storage devices that could be moved between machines allowed software to move between computers, and sometimes entire segments of business logic moved between companies based on that software. By the 1960s, companies were marketing computer programs as a cottage industry. The first computer program was deposited at the US Copyright Office in 1961. It was a simple thing. A tape with a computer program that had been filed by North American Aviation. Imagine the examiners looking at it with their heads cocked to the side a bit. “What do we do with this?” They hadn’t even figured it out when they got three more from General Dynamics and two more programs showed up from a student at Columbia Law. A punched tape held the equivalent of a bunch of punched cards. A magnetic tape just held more of the same and read faster. This was pretty much what those piano rolls from the 1909 law had on them. Registration was added for all five in 1964. And thus software copyright was born. But of course it wasn’t just a roll that had impressions for when a player piano struck a hammer. If someone found a roll on the ground, they could put it into another piano and hit play. But the likelihood that they could reproduce the piano roll was low. The ability to reproduce punch cards had been there. But while it likely didn’t take the same amount of time it took to reproduce a copy of Plato’s Republic before the advent of the printing press, the occurrences weren’t frequent enough to encounter a likely need for adjudication. That changed with high speed punch devices and then the ability to copy magnetic tape. 
Contracts (which we might think of as EULAs today, in a way) provided a license for a company to use software, but new questions were starting to form around who was bound to the contract and how protection was extended based on a number of factors. Thus the LA, or License Agreement, part of EULA - rather than just a contract when buying a piece of software. And this brings us to the forming of the modern software legal system. That’s almost a longer story than the written history we have of early intellectual property law, so we’ll pick that up in the next episode of the podcast!
6/7/202117 minutes, 3 seconds
Episode Artwork

A History Of Text Messages In A Few More Than 160 Characters

Texts are sent and received using SMS, or Short Message Service. Due to the small size of the signaling messages that carried them on second generation networks, they were limited to 160 characters initially. You know that 140 character max from Twitter? We’re so glad you chose to join us on this journey, where we weave our way from the topmasts of the 1800s to the skinny jeans of San Francisco and Twitter. What we want you to think about through this episode is the fact that this technology has changed our lives. Before texting we had answering machines, we wrote letters, we sent more emails but didn’t have an expectation of immediate response. Maybe someone got back to us the next day, maybe not. But now, we rely on texting to coordinate gatherings, pick up the kids, get a pin on a map, provide technical support, send links, send memes, convey feelings in ways that we didn’t do when writing letters. I mean including an animated gif in a letter meant melty peanut butter. Wait, that’s jif. Sorry. And few technologies have sprung into our everyday use so quickly in the history of technology. It took generations if not 1,500 years for bronze working to migrate out of the Vinča Culture and bring an end to the Stone Age. It took a few generations if not a couple of hundred years for electricity to spread throughout the world. The rise of computing took a few generations to spread from first mechanical then to digital and then to personal computing and now to ubiquitous computing. And we’re still struggling to come to terms with job displacement and the productivity gains that have shifted humanity more rapidly than any other time including the collapse of the Bronze Age. But the rise of cellular phones and then the digitization of them combined with globalization has put instantaneous communication in the hands of everyday people around the world. We’ve decreased our reliance on paper and transporting paper and moved more rapidly into a digital, even post-PC era. And we’re still struggling to figure out what some of this means. But did it happen as quickly as we think? Let’s look at how we got here. Bell Telephone introduced the push button phone in 1963 to replace the rotary dial telephone that had been invented in 1891 and become a standard. And it was only a matter of time before we’d find a way to associate letters with it. Once we could send bits over devices instead of just opening up a voice channel it was only a matter of time before we’d start sending data as well. Some of those early bits we sent were things like typing our social security number or some other identifier for early forms of call routing. Heck, the fax machine was invented all the way back in 1843 by a Scottish inventor called Alexander Bain. So given that we were sending different types of data over permanent and leased lines it was only a matter of time before we started doing so over cell phones. The first cellular networks were analog in what we now think of as first generation, or 1G. GSM, or Global System for Mobile Communications, is a standard that came out of the European Telecommunications Standards Institute and started getting deployed in 1991. That became what we now think of as 2G and paved the way for new types of technologies to get rolled out. The first text message simply said “Merry Christmas” and was sent on December 3rd, 1992. It was sent to Richard Jarvis at Vodafone by Neil Papworth. As with a lot of technology it was actually thought up eight years earlier, by Bernard Ghillebaert and Friedhelm Hillebrand. 
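As a quick aside, here’s a back-of-the-envelope way to see where that 160 comes from: an SMS carries 140 octets of user data, and the GSM 7-bit default alphabet packs 8 characters into every 7 octets. This is just a sketch of the arithmetic with the commonly cited numbers, not a reading of the spec itself.

```python
# Back-of-the-envelope: why a classic SMS tops out at 160 characters.
# An SMS user-data field carries 140 octets; the GSM 7-bit default alphabet
# uses 7 bits per character, so characters pack tighter than one per byte.
PAYLOAD_OCTETS = 140
GSM7_BITS_PER_CHAR = 7

print((PAYLOAD_OCTETS * 8) // GSM7_BITS_PER_CHAR)  # 160

# The same 140 octets in 16-bit UCS-2 (used for non-Latin alphabets) only
# hold 70 characters - which is why some messages split sooner than others.
print((PAYLOAD_OCTETS * 8) // 16)  # 70
```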
From that first message, the use cases moved to simply alerting devices of various statuses, like when there was a voicemail. These days we mostly use push notification services for that. To support using SMS for that, carriers started building out SMS gateways and by 1993 Nokia was the first cell phone maker to actually support end-users sending text messages. Texting was expensive at first, but adoption slowly increased. We could text in the US by 1995 but cell phone subscribers were sending fewer than 6 texts a year on average. But as networks grew and costs came down, adoption increased to a little over one a day by the year 2000. Another reason adoption was slow was because using multi-tap to send a message sucked. Multi-tap was where we had to use the 10-key pad on a device to type out messages. You know, ABC are on the 2 key, so the first time you tap it you get the number, the next time an A, the next a B, the next a C. And the 3 key is D, E, and F. The 4 is G, H, and I and the 5 is J, K, and L. The 6 is M, N, and O and the 7 is P, Q, R, and S. The 8 is T, U, and V and the 9 is W, X, Y, and Z. This layout goes back to old Bell phones that had those letters printed under the numbers. That way if we needed to call 1-800-PODCAST we could map which letters went to what (there’s a little multi-tap decoder sketched below). A small company called Research in Motion introduced an Inter@active Pager in 1996 to do two-way paging. Paging services went back decades. My first was a SkyTel, which had its roots in Mississippi, where John N Palmer bought a 300 person paging company using an old-school radio paging service. That FCC license he picked up evolved through more acquisitions across Alabama, Louisiana, and New York by the mid-80s, and the company grew nationally to 30,000 subscribers in 1989 and over 200,000 less than four years later. A market validated, RIM introduced the BlackBerry on the DataTAC network in 2002, expanding from just text to email, mobile phone services, faxing, and now web browsing. We got the Treo the same year. But it was the BlackBerry that gave us that now iconic keyboard. Nokia was the first cellular device maker to make a full keyboard for their Nokia 9000i Communicator in 1997, so it wasn’t an entirely new idea. But by now, more and more people were thinking of what the future of mobility would look like. The 3rd Generation Partnership Project, or 3GPP, was formed in 1998 to dig into next generation networks. They began as an initiative at Nortel and AT&T but grew to include NTT DoCoMo, British Telecom, BellSouth, Ericsson, Telenor, Telecom Italia, and France Telecom - a truly global footprint. With a standards body in place, we could move faster and they began planning the roadmap for 3G and beyond (at this point we’re on 5G). Faster data transfer rates let us do more. We weren’t just sending texts any more. MMS, or Multimedia Messaging Service, was then introduced and use grew to hundreds of millions and then billions of photos sent, encoded using technology much like the MIME we use for multimedia content on websites. At this point, people were paying a fee for every x number of messages and every MMS. Phones had cameras now, so in a pre-Instagram world this was how we shared them. Granted they were blurry by modern standards, but progress. Devices became more and more connected as data plans expanded to eventually often be unlimited. But SMS was still slow to evolve in a number of ways. For example, group chat was not really much of a thing. 
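And here is that multi-tap scheme as a small illustrative sketch - a toy decoder, not any handset’s real firmware - where repeated presses of a key cycle through its letters and a pause (shown here as a space) separates letters:

```python
# A toy multi-tap decoder: repeated presses of a key cycle through its
# letters, and a space in the input marks the pause between letters.
# This is an illustrative sketch, not any phone's actual implementation.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def decode_multitap(presses: str) -> str:
    """Decode input like '44 33 555 555 666' into 'hello'."""
    out = []
    for group in presses.split():
        letters = KEYPAD[group[0]]
        # One press = first letter, two presses = second letter, and so on,
        # wrapping around if you tap past the end of the key's letters.
        out.append(letters[(len(group) - 1) % len(letters)])
    return "".join(out)

print(decode_multitap("44 33 555 555 666"))  # hello
```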
Group chat over SMS didn’t really arrive until 2006, when a little company called Twitter came along to make it easy for people to post a message to their friends. Initially it worked over text message until they moved to an app. And texting was used by some apps to let users know there was data waiting for them. Until it wasn’t. Twilio was founded in 2008 to make it easy for developers to add texting to their software. Now every possible form of text integration was as simple as importing a framework. Apple introduced the Apple Push Notification service, or APNs, in 2009. By then devices were always connected to the Internet and the send and receive cycles for email and other apps that were fine on desktops were destroying battery life. APNs then allowed developers to build apps that only established a communication channel when there was data waiting for them. Initially push notifications were limited to 256 bytes, but due to the popularity and different implementation needs, payloads grew to 2 kilobytes and then, with the move to an HTTP/2 interface in 2015, to 4 kilobytes. This is important because it paved the way for iChat - now called iMessage, or just Messages - and then other similar services for various platforms that moved instant messaging off SMS and MMS and over to the vendor who builds the device. Facebook Messenger came along in 2011, and now the kids use Instagram messaging, Snapchat, Signal or any number of other messaging apps. Or they just text. It’s one of a billion communications tools that also include Discord, Slack, Teams, LinkedIn, or even the in-game options in many a game. Kinda’ makes restricting communications - and restricting spam - a bit of a challenge at this point. My kid finishes track practice early. She can just text me. My dad can’t make it to dinner. He can just text me. And of course I can get spam through texts. And everyone can message me on one of about 10 other apps on my phone. And email. On any given day I receive upwards of 300 messages, so sometimes it seems like I could just sit and respond to messages all day every day and still never be caught up. And get this - we’re better for it all. We’re more productive, we’re more well connected, and we’re more organized. Sure, we need to get better at having more meaningful reactions when we’re together in person. We need to figure out what a smaller, closer knit group of friends is like and how to be better at being there for them rather than just sending a sad face in a thread where they’re indicating their pain. But there’s always a transition where we figure out how to embrace these advances in technology. There are always opportunities in the advancements and there are always new evolutions built atop previous evolutions. The rate of change is increasing. The reach of change is increasing. And the speed at which changes propagate is unparalleled today. Some will rebel against changes, seeking solace in older ways. It’s always been like that - the Amish can often be seen on a buggy pulled by a horse, so a television or phone capable of texting would certainly be out of the question. Others embrace technology faster than some of us are ready for. Like when I realized some people had moved away from talking on phones and were pretty exclusively texting. Spectrums. I can still remember picking up the phone and hearing a neighbor on with a friend. Party lines were still a thing in Dahlonega, Georgia when I was a kid. I can remember the first dedicated line and getting in trouble for running up a big long distance bill. 
I can remember getting our first answering machine and changing the messages on it to be funny. Most of that was technology that moved down market but had been around for a long time. The rise of messaging on the cell phone and then the smartphone, though - that was a turning point that started going to market in 1993 and within 20 years truly revolutionized human communication. How can we get messages faster than instant? Who knows, but I look forward to finding out. 
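One footnote on those push notification payloads mentioned above: here’s a rough sketch of what an APNs notification body can look like, along with a check against the 4-kilobyte cap. The message text and the custom key are made-up examples, and this skips the real-world plumbing (device tokens and certificate or token-based authentication) entirely.

```python
import json

# A rough sketch of an APNs notification payload - the JSON that gets
# delivered to a device over the HTTP/2 interface. The keys under "aps"
# are Apple's documented basics; the text and custom key are made up.
payload = {
    "aps": {
        "alert": {"title": "Track practice", "body": "Done early - come get me!"},
        "sound": "default",
        "badge": 1,
    },
    # Custom keys ride alongside "aps" for the receiving app to interpret.
    "threadId": "family",
}

encoded = json.dumps(payload).encode("utf-8")
# Regular notifications are capped at 4 KB over the HTTP/2 interface.
assert len(encoded) <= 4096, "payload too large for a standard push"
print(f"{len(encoded)} bytes of a 4096-byte budget")
```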
5/16/202116 minutes, 9 seconds
Episode Artwork

Project Xanadu

Java, Ruby, PHP, Go. These are languages used to build web applications that dynamically generate content, which is then interpreted as a file by a web browser. That file is rarely static these days and the power of the web is that an app or browser can reach out and obtain some data, get back some XML or JSON or YAML, and provide an experience to a computer, mobile device, or even embedded system. The web is arguably the most powerful, transformational technology in the history of technology. But the story of the web begins in philosophies that far predate its inception. It goes back to a file, which we can think of as a document, on a computer that another computer reaches out to and interprets. A file composed of hypertext. Ted Nelson coined the term hypertext. Plenty of others put the concepts of linking objects into the mainstream of computing. But he coined the term that he’s barely connected to in the minds of many. Why is that? Tim Berners-Lee invented the World Wide Web in 1989. Elizabeth Feinler developed a registry of names that would evolve into DNS so we could find computers online and access those web sites without typing in impossible to remember numbers. Bob Kahn and Leonard Kleinrock were instrumental in the Internet Protocol, which allowed all those computers to be connected together, providing the schemes for those numbers. Some will know these names; most will not. But a name that probably doesn’t come up enough is Ted Nelson. His tale is one of brilliance and the early days of computing and the spread of BASIC and an urge to do more. It’s a tale of the hacker ethic. And yet, it’s also a tale of irreverence - to be used as a warning for those with aspirations to be remembered for something great. Or is it? Steve Jobs famously said “real artists ship.” Ted Nelson did ship. Until he didn’t. Let’s go all the way back to 1960, when he started Project Xanadu. Actually, let’s go a little further back first. Nelson was born to TV director Ralph Nelson and Celeste Holm, who won an Academy Award for her role in Gentleman’s Agreement in 1947, took home another pair of nominations over her career, and was the original Ado Annie in Oklahoma!. His dad worked on The Twilight Zone - so of course he majored in philosophy at Swarthmore College and then went off to the University of Chicago and then Harvard for graduate school, taking a stab at film after he graduated. But he was meant for an industry that didn’t exist yet but would some day eclipse the film industry: software. While in school he got exposed to computers and started to think about this idea of a repository of all the world’s knowledge. And it’s easy to imagine a group of computing aficionados sitting in a drum circle, smoking whatever they were smoking, and having their minds blown by that very concept. And yet, it’s hard to imagine anyone in that context doing much more. And yet he did. Nelson created Project Xanadu in 1960. As we’ll cover, he did a lot of projects during the remainder of his career. The journey is what is so important, even if we never get to the destination. Because sometimes we influence the people who get there. And the history of technology is as much about failed or incomplete evolutions as it is about those that become ubiquitous. It began with a project while he was enrolled in Harvard grad school. Other word processors were at the dawn of their existence. But he began thinking through and influencing how they would handle information storage and retrieval.  
Xanadu was supposed to be a computer network that connected humans to one another. It was supposed to be simple and a scheme for world-wide electronic publishing. Unlike the web, which would come nearly three decades later, it was supposed to be bilateral, with broken links self-repairing, much as nodes on the ARPAnet did. His initial proposal was a program in machine language that could store and display documents. Being before the advent of Markdown, ePub, XML, PDF, RTF, or any of the other common open formats we use today, it was rudimentary and would evolve over time. Keep in mind, it was for documents - and as Nelson would say later, the web, which began as a document tool, was a fork of the project. The term Xanadu was borrowed from Samuel Taylor Coleridge’s Kubla Khan, itself written after some opium fueled dreams about a garden in Kublai Khan’s Shangdu, or Xanadu. In his biography, Coleridge explained the rivers in the poem supply “a natural connection to the parts and unity to the whole” and he said a “stream, traced from its source in the hills among the yellow-red moss and conical glass-shaped tufts of bent, to the first break or fall, where its drops become audible, and it begins to form a channel.” Connecting all the things was the goal and so Xanadu was the name. He gave a talk and presented a paper called “A File Structure for the Complex, the Changing and the Indeterminate” at the Association for Computing Machinery in 1965 that laid out his vision. This was the dawn of interactivity in computing. Digital Equipment had launched just a few years earlier and brought the PDP-8 to market that same year. The smell of change was in the air and Nelson was right there. After that, he started to see all these developments around the world. He worked on a project at Brown University to develop a word processor with many of his ideas in it. But the output of that project, as with most word processors since, was to get things printed. He believed content was meant to be created and live its entire lifecycle in the digital form. This would provide perfect forward and reverse citations, text enrichment, and change management. And maybe if we all stand on the shoulders of giants, it would allow us the ability to avoid rewriting or paraphrasing the works of others to include them in our own writings. We could do more without that tedious regurgitation. He furthered his counter-culture credentials by going to Woodstock in 1969. Probably not for that reason, but it happened nonetheless. And he traveled and worked with more and more people and companies, learning and engaging and enriching his ideas. And then he shared them. Computer Lib/Dream Machines was a paperback book. Or two. It had a cover on each side. Originally published in 1974, it was one of the most important texts of the computer revolution. Steven Levy called it an epic. It’s rare to find it for less than a hundred bucks on eBay at this point because of how influential it was and what an amazing snapshot in time it represents. Xanadu was to be a hypertext publishing system in the form of Xanadocs, or files that could be linked to from other files. A Xanadoc used Xanalinks to embed content from other documents into a given document. These spans of text would become transclusions: they would change in the document that included the content whenever they changed in the live source document (there’s a small sketch of the idea below). The iterations towards working code were slow and the years ticked by. That talk in 1965 gave way to the 1970s, then the 80s. 
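To make the transclusion idea a little more concrete, here’s a minimal toy model - an illustration of the concept, not Xanadu’s actual design - where a document stores links to spans of other documents by address and pulls the text in live at render time, so edits to the source show up wherever it’s included:

```python
# A toy model of transclusion: a document stores links to spans of other
# documents instead of copies of their text, and resolves them at render
# time. This is an illustration of the idea, not Xanadu's actual design.
library = {
    "coleridge": "In Xanadu did Kubla Khan a stately pleasure-dome decree",
}

class Xanadoc:
    def __init__(self, parts):
        # parts is a list of plain strings or ("link", doc_id, start, end) tuples
        self.parts = parts

    def render(self):
        out = []
        for part in self.parts:
            if isinstance(part, tuple):
                _, doc_id, start, end = part
                out.append(library[doc_id][start:end])  # pulled in live, never copied
            else:
                out.append(part)
        return "".join(out)

doc = Xanadoc(['Coleridge wrote: "', ("link", "coleridge", 0, 24), '..."'])
print(doc.render())  # Coleridge wrote: "In Xanadu did Kubla Khan..."

# Edit the source and the quoting document changes with it - no stale copy.
library["coleridge"] = "IN XANADU DID KUBLA KHAN a stately pleasure-dome decree"
print(doc.render())  # Coleridge wrote: "IN XANADU DID KUBLA KHAN..."
```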
Some thought him brilliant. Others didn’t know what to make of it all. But many knew of his ideas for hypertext, and once the idea was out there, its arrival started to feel inevitable. Byte Magazine published many of his thoughts in a 1988 piece called “Managing Immense Storage” and by then the personal computer revolution had come in full force. Tim Berners-Lee proposed the World Wide Web the next year, using a protocol they called Hypertext Transfer Protocol, or http. Yes, the hypertext philosophy was almost a means of paying homage to the hard work and deep thinking Nelson had put in over the decades. But not everyone saw it as though Nelson had made great contributions to computing.  “The Curse of Xanadu” was an article published in Wired Magazine in 1995. In the article, the author points out that the web had come along using many of the ideas Nelson and his teams had worked on over the years but actually shipped - whereas Nelson hadn’t. Once shipped, the web rose in popularity, becoming the ubiquitous technology it is today. The article looked at Xanadu as vaporware. But there is a deeper, much more important meaning to Xanadu in the history of computing.  Perhaps inspired by the Wired article, the group released an incomplete version of Xanadu in 1998. But by then, other formats - including PDF, which was invented in 1993, and .doc for Microsoft Word - were the primary mechanisms by which we stored documents, and first gopher and then the web were spreading to interconnect humans with content. https://www.youtube.com/watch?v=72M5kcnAL-4 The Xanadu story isn’t a tragedy. Would we have had hypertext as a part of Douglas Engelbart’s oNLine System without it? Would we have object-oriented programming or later the World Wide Web without it? The very word hypertext is almost an homage, even if they don’t know it, to Nelson’s work. And the look and feel of his work lives on in places like GitHub, whether directly influenced or not, where we can see changes in code side-by-side with actual production code, changes that are stored and perhaps rolled back forever. Larry Tesler coined the term Cut and Paste. While Nelson calls him a friend in Werner Herzog’s Lo and Behold, Reveries of the Connected World, he also points out that Tesler’s term is flawed. And I think this is where we as technologists have to sometimes trim down our expectations of how fast evolutions occur. We take tiny steps because as humans we can’t keep pace with the rapid rate of technological change. We can look back and see a two steps forward and one step back approach since the dawn of written history. Nelson still doesn’t think the metaphors that harken back to paper have any place in the online written word.  Here’s another important trend in the history of computing. As we’ve transitioned to more and more content living online exclusively, the content has become diluted. One publisher I wrote online pieces for asked that they all be +/- 700 words and asked that paragraphs be no more than 4 sentences long (preferably 3) and the sentences should be written at about a 5th or 6th grade level. Maybe Nelson would claim that this de-evolution of writing is due to search engine optimization gamifying the entirety of human knowledge and that a tool like Xanadu would have been the fix. After all, if we could borrow the great works of others we wouldn’t have to paraphrase them. But I think as with most things, it’s much more nuanced than that. Our always online, always connected brains can only accept smaller snippets. So that’s what we gravitate towards. 
Actually, we have plenty of capacity for whatever we actually choose to immerse ourselves into. But we have more options than ever before and we of course immerse ourselves into video games or other less literary pursuits. Or are they more literary? Some generations thought books to be dangerous. As do all oppressors. So who am I to judge where people choose to acquire knowledge or what kind they indulge themselves in? Knowledge is power and I’m just happy they have it. And they have it in part because others were willing to water down the concepts to ship a product. Because the history of technology is about evolutions, not revolutions. And those often take generations. And Nelson is responsible for some of the evolutions that brought us the ht in http or html. And for that we are truly grateful! As with the great journey from Lord of the Rings, rarely is greatness found alone. The Xanadu adventuring party included Cal Daniels, Roger Gregory, Mark Miller, Stuart Greene, Dean Tribble, and Ravi Pandya. The project became a part of Autodesk in the 80s, got rewritten in Smalltalk, was considered a rival to the web, but really is more of an evolutionary step on that journey. If anything it’s a divergence then convergence to and from Vannevar Bush’s Memex. So let me ask this as a parting thought: are the places where you are not willing to sacrifice any of your core designs or beliefs worth the price being paid? Are they worth someone else ending up with a place in the history books where (like with this podcast) we oversimplify complex topics to make them digestible? Sometimes it’s worth it. In no way am I in a place to judge the choices of others. Only history can really do that - but when it happens it’s usually an oversimplification anyways… So the building blocks of the web lie in irreverence - in hypertext. And while some grew out of irreverence and diluted their vision after an event like Woodstock, others like Nelson and his friend Douglas Engelbart forged on. And their visions didn’t come with commercial success. But as integral building blocks of the modern connected world today, they represent minds as great as practically anyone else’s in computing. 
5/13/202119 minutes
Episode Artwork

An Abridged History Of Instagram

This was a hard episode to do. Because telling the story of Instagram is different than explaining the meaning behind it. You see, on the face of it - Instagram is an app to share photos. But underneath that it’s much more. It’s a window into the soul of the Internet-powered culture of the world. Middle schoolers have always been stressed about what their friends think. It’s amplified on Instagram. People have always been obsessed with and copied celebrities - going back to the ages of kings. That too is on Instagram. We love dogs and cute little weird animals. So does Instagram.  Before Instagram, we had photo sharing apps. Like Hipstamatic. Before Instagram, we had social networks - like Twitter and Facebook. How could Instagram do something different and yet, so similar? How could it offer that window into the world when the lens the photos are snapped with is tinted like rose colored glasses? Do they show us reality or what we want reality to be? Could it be that the food we throw away or the clothes we donate tell us more about us as humans than what we eat or keep? Is the illusion worth billions of dollars a year in advertising revenue while the reality represents our repressed shame? Think about that as we go through this story. If you build it, they will come. Everyone who builds an app just kinda’ automatically assumes that throngs of people will flock to the App Store, download the app, and they will be loved and adored and maybe even become rich. OK, not everyone thinks such things - and with the number of apps on the stores these days, the chances are probably getting closer to those that a high school quarterback will play in the NFL. But in today’s story, that is exactly what happened.  And Kevin Systrom had already seen it happen. He was offered a job as one of the first employees at Facebook while still going to Stanford. That’ll never be a thing. Then while on an internship he was asked to be one of the first Twitter employees. That’ll never be a thing either. But they were things, obviously! So in 2010, Systrom started working on an app he called Burbn and within two years sold the company, by then called Instagram, for one billion dollars. In doing so he and his co-founder Mike Krieger helped forever change the deal landscape for mergers and acquisitions of apps and, more profoundly, gave humanity lenses with which to see a world we want to see - if not reality. Systrom didn’t have a degree in computer science. In fact, he taught himself to code after working hours, then during working hours, and by osmosis through working with some well-known founders.  Burbn was an app to check in and post plans and photos. It was written in HTML5 and, in a Cinderella story, he was able to raise half a million dollars in funding from Baseline Ventures and Andreessen Horowitz, bringing in Mike Krieger as a co-founder.  At the time, Hipstamatic was the top photo manipulation and filtering app. Given that the iPhone came with a camera on par with (if not better than) most digital point and shoots at the time, the pair re-evaluated the concept and instead leaned further into photo sharing, while still maintaining the location tagging. The original idea was to swipe right and left, as we do in apps like Tinder. But instead they chose to show photos in chronological order and used a now iconic 1:1 aspect ratio - the photos were square - so there was room on the screen to show metadata and a taste of the next photo, to keep us scrolling. 
The camera was simple, like the Holga camera Systrom had been given while studying abroad during his time at Stanford. That camera made pictures a little blurry and, in an almost filtered way, made them look artistic.  After Systrom graduated from Stanford in 2006, he worked at Google, then NextStop, and then got the bug to make his own app. And boy did he. One thing though: even his wife Nicole didn’t think she could take good photos, having seen those from a friend of Systrom’s. He said the photos were so good because of the filters. And so we got the first filter, X-Pro 2, so she could take great photos on the iPhone 3G.  Krieger shared the first post on Instagram on July 16, 2010 and Systrom followed up within a few hours with a picture of a dog. The first of probably a billion dog photos (including a few of my own). And they officially published Instagram on the App Store in October of 2010. After adding more and more filters, Systrom and Krieger closed in on one of the greatest growth hacks of any app: they integrated with Facebook, Twitter, and Foursquare so you could take the photo in Instagram and shoot it out to one of those apps - or all three. At the time Facebook was more of a browser tool. Few people used the mobile app. And for those that did try to post photos on Facebook, doing so was laborious, using a mobile camera roll in the app and taking more steps than needed. Instagram became the perfect glue to stitch other apps together. And rather than always needing to come up with something witty to say like on Twitter, we could just point the camera on our phone at something and hit a button.  The posts had links back to the photo on Instagram. They hit 100,000 users in the first week and a million users by the end of the year. Their next growth hack was to borrow the hashtag concept from Twitter and other apps, which they added in January of 2011. Remember how Systrom interned at Odeo and turned down the offer to go straight to Twitter after college? Twitter didn’t have photo sharing at the time, but Twitter co-founder Jack Dorsey had shown Systrom plenty of programming techniques and the two stayed in touch. He became an angel investor in a $7 million Series A and the first real influencer on the platform, sending that link to every photo to all of his Twitter followers every time he posted. The growth continued. In June 2011 they hit 5 million users, and doubled to 10 million by September of 2011. I was one of those users, posting the first photo to @krypted in the fall - being a nerd it was of the iOS 5.0.1 update screen and according to the lone comment on the photo my buddy @acidprime apparently took the same photo.  They spent the next few months just trying to keep the servers up and running and released an Android version of the app in April of 2012, just a couple of days before taking on $50 million in venture capital. But that didn’t need to last long - they sold the company to Facebook for a billion dollars a few days later, effectively doubling the money of each investor in that last round of funding and shooting up to 50 million users by the end of the month.  With 13 employees, that’s nearly $77 million per employee. Granted, much of that went to Systrom and the investors. The Facebook acquisition seemed great at first. Instagram got access to bigger resources than even a few more rounds of funding would have provided.  
Facebook helped them scale up to 100 million users within a year and, following Facebook TV and the brief but impactful release of Vine at Twitter, Instagram added video sharing, photo tagging, and the ability to add links in 2013.  Looking at a history of their feature releases, they’re slow and steady and probably the most user-centered releases I’ve seen. And in 2013, they grew to 150 million users, proving the types of rewards that come from doing so.  With that kind of growth it might seem that it can’t last forever - and yet on the back of new editing tools, a growing team, and advertising tools, they managed to hit a staggering 300 million users in 2014. While they released thoughtful, direct, human sold advertising before, they opened up the ability to buy ads to all advertisers, piggybacking on the Facebook ad selling platform in 2015. That’s the same year they introduced Boomerang, which looped photos in forward and reverse. It was cute for a hot minute.  2016 saw the introduction of analytics that included demographics, impressions, likes, reach, and other tools for businesses to track performance not only of ads, but of posts. As with many tools, it was built for the famous influencers that had the ear of the founders and management team - and made available to anyone. They also introduced Instagram Stories, which was a huge development effort, and they owned that they copied it from Snapchat - a surprising and truly authentic move for a Silicon Valley startup. And we could barely call them a startup any longer, shooting past half a billion users by the middle of the year and 600 million by the end of the year.  That year, they also brought us live video, a Windows client, and - one of my favorite features, with a lot of people posting in different languages - automatic translation of posts.  But something else happened in 2016. Donald Trump was elected to the White House. This is not a podcast about politics but it’s safe to say that it was one of the most divisive elections in recent US history. And one of the first where social media is reported to have potentially changed the outcome. Disinformation campaigns from foreign actors, data illegally obtained via Cambridge Analytica on the Facebook network, increasingly insular personal networks, and machine learning-driven doubling down on only seeing things that appealed to our world view led many to point at networks like Facebook and Twitter as having been party to whatever they thought the “other side” in an election had done wrong.  Yet Instagram was just a photo sharing site. They put the users at the center of their decisions. They promoted the good things in life. While Zuckerberg claimed that Facebook couldn’t have helped change any outcomes and that Facebook was just an innocent platform that amplified human thoughts - Systrom openly backed Hillary Clinton. And yet, even with disinformation spreading on Instagram, they seemed immune from accusations and from having to go to Capitol Hill to be grilled following the election. Being good to users apparently has its benefits.  However, some regulation needed to happen. In 2017, the Federal Trade Commission stepped in to force influencers to be transparent about their relationships with advertisers - Instagram responded by giving us the ability to mark a post as sponsored. Still, Instagram revenue spiked to over 3 and a half billion dollars in 2017. Instagram revenue grew past 6 billion dollars in 2018. 
Systrom and Krieger stepped away from Instagram that year. It was now on autopilot. Although I think all chief executives have a… Instagram revenue shot over 9 billion dollars in 2019. In those years they released IGTV and tried to get more resources from Facebook, contributing far more to the bottom line than they took.  2020 saw Instagram ad revenue close in on 13.86 billion dollars, with projected 2021 revenues growing past 18 billion. In The Picture of Dorian Gray from 1890, Lord Henry describes the impact of influence as destroying our genuine and true identity, taking away our authentic motivations, and, as Shakespeare would have put it, making us servile to the influencer. Some are famous and so become influencers on the product naturally, like musicians, politicians, athletes, and even the Pope. Others become famous due to getting showcased by the @instagram feed or some other prominent person. These influencers often stage a beautiful life and, to be honest, sometimes we just need that as a little mind candy. But other times it can become too much, forcing us to constantly compare our skin to doctored skin, our lifestyle to those who staged their own, and our number of friends to those who might just have bought theirs. And seeing this obvious manipulation gives some of us even more independence than we might have felt before. We have a choice: to be or not to be.  The Instagram story is one with depth. Those influencers are one of the more visible aspects, going back to the first sponsored photos, posted by Snoop Dogg. And when Mark Zuckerberg decided to buy the company for a billion dollars, many thought he was crazy. But once they turned on the ad revenue machine, which he insisted Systrom wait on until the company had enough users, it was easy to go from 3 to 6 to 9 to over 13 and now likely over 18 billion dollars. That’s a greater than 30:1 return on investment, helping to prove that such lofty acquisitions aren’t crazy.  It’s also a story of monopoly, or at least of suspected monopolies. Twitter tried to buy Instagram and Systrom claims to have never seen a term sheet with a legitimate offer. Then Facebook swooped in and helped fast-track regulatory approval of the acquisition. With the acquisition of WhatsApp, Facebook owns four of the top 6 social media sites, with Facebook, WhatsApp, Facebook Messenger, and Instagram all over a billion users and YouTube arguably being more of a video site than a true social network. And they tried to buy Snapchat - only the 17th ranked network.  More than 50 billion photos have been shared through Instagram. That’s about a thousand a second. Many are beautiful...
4/24/202121 minutes, 16 seconds
Episode Artwork

Before the iPhone Was Apple's Digital Hub Strategy

Steve Jobs returned to Apple in 1996. At the time, many people had a camera, like the Canon Elph that was released that year, maybe a video camera, and probably a computer, and about 16% of Americans had a cell phone. Some had a voice recorder or a Discman, and some in the audio world had a four track machine. Many had CD players and maybe even a LaserDisc player.  But all of this was changing. Small, cheap microprocessors were leading to more and more digital products. The MP3 was starting to trickle around after being patented in the US that year. Netflix would be founded the next year, as DVDs started to spring up around the world. Ricoh, Polaroid, Sony, and most other electronics makers released digital video cameras. There were early e-readers, personal digital assistants, and even research into digital video recorders that could record your favorite shows so you could watch them when you wanted. In other words we were just waking up to a new, digital lifestyle. But the industries were fragmented.  Jobs and the team continued the work begun under Gil Amelio to reduce the number of products down from 350 to about a dozen. They made products that were pretty and functional and revitalized Apple. But there was a strategy that had been coming together in their minds and it centered around digital media and the digital lifestyle. We take this for granted today, but mostly because Apple made it ubiquitous.  Apple saw the iMac as the centerpiece for a whole new strategy. But all this new type of media and the massive files needed a fast bus to carry all those bits. That had been created back in 1986 and slowly improved on over the next few years in the form of IEEE 1394, or Firewire. Apple started it - Toshiba, Sony, Panasonic, Hitachi, and others helped bring it to devices they made. Firewire could connect 63 peripherals at 100 megabits per second, later increased to 200 and then 400, with 800 and even faster speeds defined in later revisions of the standard. Plenty fast enough to transfer those videos, songs, and whatever else we wanted. iMovie was the first of the applications that fit into the digital hub strategy. It was originally released in 1999 for the iMac DV, the first iMac to come with built-in firewire. I’d worked on Avid and SGI machines dedicated to video editing at the time but this was the first time I felt like I was actually able to edit video. It was simple, could import video straight from the camera, and allowed me to drag clips into a timeline and then add some rudimentary effects. Simple, clean, and with a product that looked cool. And here’s the thing, within a year Apple made it free. One catch. You needed a Mac. This whole Digital Hub Strategy idea was coming together. Now as Steve Jobs would point out in a presentation about the Digital Hub Strategy at Macworld 2001, up to that point, personal computers had mainly been about productivity. Automating first the tasks of scientists, then with the advent of the spreadsheet and databases, moving into automating business and personal functions. A common theme in this podcast is that what drives computing is productivity, telemetry, and quality of life. The telemetry gains came with connecting humanity through the rise of the internet in the later 1990s. But these new digital devices were what was going to improve our quality of life. And for anyone that could get their hands on an iMac they were now doing so. But it still felt like a little bit of a closed ecosystem.  
Apple released a tool for making DVDs in 2001 for the Mac G4, which came with a SuperDrive, or Apple’s version of an optical drive that could read and write CDs and DVDs. iDVD gave us the ability to add menus, slideshows (later easily imported as Keynote presentations when that was released in 2003), images as backgrounds, and more. Now we could take those videos we made and make DVDs that we could pop into our DVD player and watch. Families all over the world could make their vacation look a little less like a bunch of kids fighting and a lot more like bliss. And for anyone that needed more, Apple had DVD Studio Pro - which many a film studio used to make the menus for movies for years. They knew video was going to be a thing because going back to the 90s, Jobs had tried to get Adobe to release Premiere for the iMac. But they’d turned him down, something he’d never forget. Instead, Jobs was able to sway Randy Ubillos to bring over a product that a Macromedia board member had convinced him to work on, called Key Grip, which was renamed to Final Cut. Apple acquired the source code and development team and released it as Final Cut Pro in 1999. And iMovie for the consumer and Final Cut Pro for the professional turned out to be a home run. But another piece of the puzzle was coming together at about the same time. Jeff Robbin, Bill Kincaid, and Dave Heller built a tool called SoundJam in 1998. They had worked on the failed Copeland project to build a new OS at Apple and afterwards, Robbin made a great old tool (that we might need again with the way extensions are going) called Conflict Catcher while Kincaid worked on the drivers for an MP3 player called the Diamond Rio. He saw these cool new MP3 things and tools like Winamp, which had been released in 1997, so decided to meet back up with Robbin for a new tool, which they called SoundJam and sold for $50.  Just so happens that I’ve never met anyone at Apple that didn’t love music. Going back to Jobs and Wozniak. So of course they would want to do something in digital music. So in 2000, Apple acquired SoundJam and the team immediately got to work stripping out features that were unnecessary. They wanted a simple aesthetic. iMovie-esque, brushed metal, easy to use. That product was released in 2001 as iTunes. iTunes didn’t change the way we consumed music. That revolution was already underway.  And that team didn’t just add brushed metal to the rest of the operating system. It had begun with QuickTime in 1991 but it was iTunes through SoundJam that had sparked brushed metal.  SoundJam gave the Mac music visualizers as well. You know, those visuals on the screen that were generated by sound waves from music we were listening to. And while we didn’t know it yet, this would be the end of software coming in physical boxes. But something else big was coming. There was another device coming in the digital hub strategy. iTunes became the de facto tool used to manage what songs would go on the iPod, released in 2001 as well. That’s worthy of its own episode which we’ll do soon.  You see, another aspect of SoundJam was that users could rip music off of CDs and into MP3s. The deep engineering work done to get the codec into the system survives here and there in the form of codecs accessible using APIs in the OS. And when combined with Spotlight to find music it all became more powerful to build playlists, embed metadata, and listen more insightfully to growing music libraries. But Apple didn’t want to just allow people to rip, find, sort, and listen to music. 
They also wanted to enable users to create music. So in 2002, Apple also acquired a company called Emagic. Emagic would become Logic Pro and Gerhard Lengeling would in 2004 release a much simpler audio engineering tool called Garage Band.  Digital video and video cameras were one thing. But cheap digital point and shoot cameras were everywhere all of a sudden. iPhoto was the next tool in the strategy, dropping in 2002. Here, we got a tool that could import all those photos from our cameras into a single library. Now called Photos, Apple gave us a taste of the machine learning to come by automatically finding faces in photos so we could easily make albums. Special services popped up to print books of our favorite photos. At the time most cameras had their own software to manage photos that had been developed as an after-thought. iPhoto was easy, worked with most cameras, and was very much not an after-thought.  Keynote came in 2003, making it easy to drop photos into a presentation and maybe even iDVD. Anyone who has seen a Steve Jobs presentation understands why Keynote had to happen, and if you look at the difference between many a PowerPoint and Keynote presentation it makes sense how, in a way, it bridged making work better and making home better.  That was the same year that Apple released the iTunes Music Store. This seemed like the final step in a move to get songs onto devices. Here, Jobs worked with music company executives to be able to sell music through iTunes - a strategy that would evolve over time to include podcasts, which the move effectively created, news, and even apps - as explored on the episode on the App Store. And that ushered in an era of creative single-purpose apps that drove down costs and made so much functionality approachable for so many.  iTunes, iPhoto, and iMovie were made to live together in a consumer ecosystem. So in 2003, Apple reached that point in the digital hub strategy where they were able to take our digital life and wrap it up in a pretty bow. They called that product iLife - which was more a bundle of these services, along with iDVD and Garage Band. Now these apps are free but at the time the bundle would set you back a nice, easy, approachable $49.  All this content creation from the consumer to the prosumer to the professional workgroup meant we needed more and more storage. Depending on the codec, we could be pushing hundreds of megabytes per second of content. So Apple licensed the StorNext File System from a company called ADIC in 2004 and released a 64-bit clustered file system over Fibre Channel. Suddenly all that new high end creative content could be shared in larger and larger environments. We could finally have someone cutting a movie in Final Cut then hand it off to someone else to cut without unplugging a firewire drive to do it. Professional workflows in a pure-Apple ecosystem were a thing.  Now you just needed a way to distribute all this content. So came iWeb in 2006, which allowed us to build websites quickly and bring all this creative content in. Sites could be hosted on MobileMe or files uploaded to a web host via FTP. Apple had dabbled in web services since the 80s with AppleLink then eWorld then iTools, .Mac, and MobileMe, the culmination of the evolutions of these services now referred to as iCloud.  And iCloud now syncs documents and more. 
Pages came in 2005, Numbers came in 2007, and they were bundled with Keynote to become Apple iWork, allowing for a competitor of sorts to Microsoft Office. Later made free and ported to iOS as well. iCloud is a half-hearted attempt at keeping these synchronized between all of our devices.  Apple had been attacking the creative space from the bottom with the tools in iLife but at the top as well. Competing with tools like Avid’s Media Composer, which had been around for the Mac going back to 1989, Apple bundled the professional video products into a single suite called Final Cut Studio. Here, Final Cut Pro, Motion, DVD Studio Pro, Soundtrack Pro, Color (obtained when Apple acquired SiliconColor and renamed it from FinalTouch), Compressor, Cinema Tools, and Qmaster for distributing the processing power for the above tools came in one big old box. iMovie and Garage Band for the consumer market and Final Cut Studio and Logic for the prosumer to professional market. And suddenly I was running around the world deploying Xsans into video shops, corporate talking head editing studios, and ad agencies. Another place where this happened was with photos. Aperture was released in 2005 and offered professional photographers tools to manage their large collections of images. And that represented the final pieces of the strategy. It continued to evolve and get better over the years. But this was one of the last aspects of the Digital Hub Strategy.  Because there was a new strategy underway. That’s the year Apple began the development of the iPhone. And this represents a shift in the strategy. Released in 2007, then followed up with the first iPad in 2010, we saw a shift from the growth of new products in the digital hub strategy to migrating them to the mobile platforms, making them stand-alone apps that could be sold on App Stores, integrated with iCloud, and killing off those that appealed to more specific needs in higher-end creative environments, like Aperture, which ended in 2014, and integrating some into other products, like Color becoming a part of Final Cut Pro. But the income from those products has now been eclipsed by mobile devices. Because when we see the returns from one strategy begin to crest - you know, like when the entire creative industry loves you - it’s time to move to another, bolder strategy. And that mobile strategy opened our eyes to always online (or frequently online) synchronization between products and integration with products, like we get with Handoff and other technologies today.  In 2009 Apple acquired a company called Lala, which would later be added to iCloud - but the impact to the Digital Hub Strategy was that it paved the way for iTunes Match, a cloud service that allowed for syncing music from a local library to other Apple devices. It was a subscription and more of a stop-gap for moving people to a subscription to license music than a lasting stand-alone product. And other acquisitions would come over time and get woven in, such as Redmatica, Beats, and Swell.  Steve Jobs said exactly what Apple was going to do in 2001. In one of the most impressive implementations of a strategy, Apple had slowly introduced quality products that tactically ushered in a digital lifestyle since the late 90s and over the next few years. iMovie, iPhoto, iTunes, iDVD, iLife, and, in a sign of the changing times, iPod, iPhone, and iCloud. Then, to signal the end of that era - because the digital lifestyle was by then ubiquitous - came the iPad. And the professional apps won over the creative industries. 
Until the strategy had been played out and Apple began laying the groundwork for the next strategy in 2005. That mobile revolution was built in part on the creative influences of Apple. Tools that came after, like Instagram, made it even easier to take great photos and connect with friends in a way iWeb couldn’t - because we got to the point where “there’s an app for that”. And as the tools weren’t needed, Apple cancelled some one by one, or even let Adobe Premiere eclipse Final Cut in many ways. Because, you know, sales of Macs going back to the iMac DV were enough to warrant building the product for the Apple platform, and eventually Adobe decided to do just that. Apple built many of these tools because there was a need and there weren’t great alternatives. Once there were great alternatives, Apple let those limited quantities of software engineers go work on other things they needed done - like building frameworks to enable a new generation of engineers to build amazing tools for the platform! I’ve always considered the release of the iPad to be the end of the era where Apple was introducing more and more software, from the increased services on the server platform to tools that do anything and everything. But 2010 is just when we could notice what Jobs was doing. In fact, looking at it, we can easily see that the strategy shifted about 5 years before that, because Apple was busy ushering in the next revolution in computing. So think about this. Take an Apple, a Microsoft, or a Google - the developers of nearly every operating system we use today. What changes did they put in place 5 years ago that are just coming to fruition today? While the product lifecycles are annual releases now, that doesn’t mean that when they have billions of devices out there the strategies don’t unfold much, much more slowly. You see, by peering into the evolutions over the past few years, we can see where they’re taking computing in the next few years. Who did they acquire? What products will they release? What gaps does that create? How can we take those gaps and build products that get in front of them? This is where magic happens. Not when we’re too early, like a General Magic was, but when we’re right on time. Unless we help set strategy upstream. Or is it all chaos and not in the least bit predictable? Feel free to send me your thoughts! And thank you…
3/29/2021 • 24 minutes, 15 seconds
Episode Artwork

The WELL, an Early Internet Community

The Whole Earth ‘lectronic Link, or WELL, was started by Stewart Brand and Larry Brilliant in 1985, and is still available at well.com. We did an episode on Stewart Brand: Godfather of the Interwebs and he was a larger than life presence amongst many of the 1980s former hippies that were shaping our digital age. From his assistance producing The Mother Of All Demos to the Whole Earth Catalog inspiring Steve Jobs and many others to his work with Ted Nelson, there are probably only a few degrees separating him from anyone else in computing. Larry Brilliant is another counter-culture hero. He did work as a medical professional for the World Health Organization to eradicate smallpox and came home to teach at the University of Michigan. The University of Michigan had been working on networked conferencing since the 70s, when Bob Parnes wrote CONFER, which would be used at Wayne State where Brilliant got his MD. But CONFER was a bit of a resource hog. PicoSpan was written by Marcus Watts in 1983. Pico is a small text editor found in many a UNIX variant. Why small? Well, modems that dialed into bulletin boards were pretty slow back then. Marcus worked at NETI, who then bought the rights for PicoSpan to take it to market. Brilliant was the chairman of NETI at the time and approached Brand about starting up a bulletin-board system (BBS). Brilliant proposed that NETI would supply the gear and software and that Brand would use his, uh, brand - and Whole Earth following - to fill the ranks. Brand’s non-profit The Point Foundation would own half and NETI would own the other half. It became an early online community outside of academia, and an important part of the rise of the splinter-nets and a holdout to the Internet. For a time, at least. PicoSpan gave users conferences. These were similar to PLATO Notes files, where a user could create a conversation thread and people could respond. These were (and still are) linear and threaded conversations. Rather than call them Notes like PLATO did, PicoSpan referred to them as “conferences,” as “online conferencing” was a common term used to describe meeting online for discussions at the time. EIES had been around going back to the 1970s, so Brand had some ideas about what an online community could be - having used it. Given the sharp drop in the cost of storage, there was something new PicoSpan could give people: the posts could last forever. Keep in mind, the Mac still didn’t ship with a hard drive in 1984. But hard drives were on the rise. And those bits that were preserved were manifested in words. Brand brought a simple mantra: You Own Your Own Words. This kept the hands of the organization clean and devoid of liability for what was said on The WELL - but also harkened back to an almost libertarian bent that many in technology had at the time. Part of me feels like libertarianism meant something different in that era. But that’s a digression. Whole Earth Review editor Art Kleiner flew up to Michigan to get the specifics drawn up. NETI’s investment had about a quarter million dollar cash value. Brand stayed home and came up with a name: The Whole Earth ‘lectronic Link, or WELL. The WELL was not the best technology, even at the time. The VAX was woefully underpowered for as many users as The WELL would grow to, and other services to dial into and have discussions on were springing up. But it was one of the most influential of the time. 
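That conference model - a handful of named conferences, each holding linear topics that anyone could append a response to - is still recognizable in every forum and message board today. As a rough illustration only (hypothetical names and structures, not PicoSpan's actual code), the whole idea fits in a few lines:

from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    author: str               # You Own Your Own Words - every post keeps its author
    body: str
    posted_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Topic:
    title: str
    posts: list[Post] = field(default_factory=list)   # a linear, append-only thread

    def respond(self, author: str, body: str) -> None:
        self.posts.append(Post(author, body))

@dataclass
class Conference:
    name: str                                          # e.g. "General"
    topics: list[Topic] = field(default_factory=list)

    def new_topic(self, title: str) -> Topic:
        topic = Topic(title)
        self.topics.append(topic)
        return topic

# A conference holds topics; each topic is a linear run of responses that never expires.
general = Conference("General")
thread = general.new_topic("Help with my modem")
thread.respond("tex", "What baud rate are you dialing in at?")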
And it was influential not because they recreated the extremely influential Whole Earth Catalog in digital form like Brilliant wanted - which would probably have been something like what Amazon reviews are now. Instead, the draw was the people. The community was fostered first by Matthew McClure, the initial director, who was a former typesetter for the Whole Earth Catalog. He’d spent 12 years on a commune called The Farm and was just getting back to society. They worked out that they needed to charge $8 a month and another couple bucks an hour to make a minimal profit. So McClure worked with NETI to get the VAX up and they created the first conference, General. Kevin Kelly from the Whole Earth Review and Brand would start discussions, and Brand mentioned The WELL in some of his writings. A few people joined, and then a few more. Others from The Farm would join him. Cliff Figallo, known as Fig, was user 19, and John Coate, who went by Tex, came in to run marketing. In those first few years they started to build up a base of users. It started with hackers and journalists, who got free accounts. And from there great thinkers joined up. People like Tom Mandel from Stanford Research Institute, or SRI, who would go on to become the editor of Time Online. His partner Nana, too. Howard Rheingold, who would go on to write a book called The Virtual Community. And they attracted more. Especially Dead Heads, who helped spread the word across the country during the heyday of the Grateful Dead. Plenty of UNIX hackers also joined. After all, the community was finding a nexus in the Bay Area at the time. They added email in 1987 and it was one of those places you could get on at least one part of this whole new internet thing. Need help with your modem? There’s a conference for that. Need to talk about calling your birth mom who you’ve never met because you were adopted? There’s a conference for that as well. Want to talk sexuality with a minister? Yup, there’s a community for that. It was one of the first times that anyone could just reach out and talk to people. And the community that was forming also met in person from time to time at office parties, furthering the cohesion. We take Facebook groups, Slack channels, and message boards for granted today. We can be us or make up a whole new version of us. We can be anonymous and just there to stir up conflict like on 4chan, or we can network with people in our industry like on LinkedIn. We can chat in real time, which is similar to the Send option on The WELL. Or we can post threaded responses to other comments. But the social norms and trends were proving as true then as now. Communities grow, they fragment, people create problems, people come, people go. And sometimes, as we grow, we inspire. Those early adopters of The WELL opened the eyes of Craig Newmark of Craigslist to the growing power of the Internet. And those of future developers at Apple. Hippies versus nerds - except not really versus, more a coming to terms, going from a “computers are part of the military industrial complex keeping us down” philosophy to more of a free, libertarian information superhighway that persisted for decades. The thought that the computer would set us free and connect the world into a new nation, as John Perry Barlow would sum up perfectly in “A Declaration of the Independence of Cyberspace”. 
By 1990 people like Barlow could make a post on The WELL from Wyoming and have Mitch Kapor, the founder of Lotus, makers of Lotus 1-2-3, show up at his house after reading the post - and they could join forces with the 5th employee of Sun Microsystems and GNU Debugging Cypherpunk John Gilmore to found the Electronic Frontier Foundation. And as a sign of the times, that’s the same year The WELL got fully connected to the Internet. By 1991 they had grown to 5,000 subscribers. That was the year Bruce Katz bought NETI’s half of The WELL for $175,000. Katz had pioneered the casual shoe market, changing the name of his family’s shoe business to Rockport and selling it to Reebok for over $118 million. The WELL had posted a profit a couple of times but by and large was growing slower than competitors. Although I’m not sure any of the members cared about that. It was a smaller community than many others, but they could meet in person and they seemed to congeal in ways that other communities didn’t. But they would keep increasing in size over the next few years. In that time Fig replaced himself with Maurice Weitman, or Mo - who had been the first person to sign up for the service. And Tex soon left as well. Tex would go on to become an early webmaster of The Gate, the community from the San Francisco Chronicle. Fig joined AOL’s GNN and then became director of community at Salon. But AOL. You see, AOL was founded in the same year. And by 1994 AOL was up to 1.25 million subscribers, with over a million logging in every day. CompuServe, Prodigy, GEnie, and Delphi were on the rise as well. The WELL had thousands of posts a day by then, but was losing money and not growing like the others. But I think the users of the service were just fine with that. The WELL was still growing slowly, and yet for many it was too big. Some of those left. Some stayed. Other communities, like The River, fragmented off. By then, The Point Foundation wanted out, so they sold their half of The WELL to Katz for $750,000 - leaving Katz as the first full owner of The WELL. I mean, they were an influential community because of some of the members, sure, but more because of the quality of the discussions. Academics, drugs, and deeply personal information. And they had always complained about Fig and Tex or whoever was in charge - you know, the counter-culture is always mad at “The Management.” But Katz was not one of them. He honestly seems to have tried to improve things - but it seems like everything he tried blew up in his face. Katz further alienated the members, fired Mo, and brought on Maria Wilhelm, but they still weren’t hitting that hyper-growth, with membership getting up to around 10,000 - while AOL was jumping from 5,000,000 to 10,000,000. But again, I’ve not found anyone who felt like The WELL should have been going down that same path. The subscribers at The WELL were looking for an experience of a completely different sort. By 1995 Gail Williams allowed users to create their own topics and the unruly bunch just kinda’ ruled themselves in a way. There was staff and drama and emotions and hurt feelings and outrage and love and kindness and, well, community. By the late 90s, the buzzword at many a company was building communities, and there were indeed plenty of communities growing. But none like The WELL. And given that some of the founders of Salon had been users of The WELL, Salon bought The WELL in 1999 and just kinda’ let it fly under the radar. The influence continued, with various journalists as members.  
The web came. And the members of The WELL continued their community. Award winning, but a snapshot in time in a way. Living in an increasingly secluded corner of cyberspace - a term that first began life in the present tense on The WELL - if you got it, you got it. In 2012, after trying to sell The WELL to another company, Salon finally sold The WELL to a group of members who had put together enough money to buy it. And The WELL moved into its current, more modern form of existence. To quote the site: “Welcome to a gathering that’s like no other. The WELL, launched back in 1985 as the Whole Earth ‘Lectronic Link, continues to provide a cherished watering hole for articulate and playful thinkers from all walks of life. For more about why conversation is so treasured on The WELL, and why members of the community banded together to buy the site in 2012, check out the story of The WELL. If you like what you see, join us!” It sounds pretty inviting. And it’s member supported. Like National Public Radio, kinda’. In what seems like an antiquated business model, it’s $15 per month to access the community. And make no mistake, it’s a community. You Own Your Own Words. If you pay to access a community, you don’t sign the ownership of your words away in a EULA. You don’t sign away rights to sell your data to advertisers, along with having ads shown to you in increasing numbers in a hunt for ever more revenue. You own more than your words; you own your experience. You are sovereign. This episode doesn’t really have a lot of depth to it. Just as most online forums lack the kind of depth that could be found on The WELL. I am a child of a different generation, I suppose. Through researching each episode of the podcast, I often read books, conduct interviews (a special thanks to Help A Reporter Out), lurk in conferences, and try to think about the connections, the evolution, and what the most important aspects of each are. There is a great little book from Katie Hafner called The Well: A Story Of Love, Death, & Real Life. I recommend it. There’s also Howard Rheingold’s The Virtual Community and John Seabrook’s Deeper: Adventures on the Net. Oh, and From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism from Fred Turner and Cyberia by Douglas Rushkoff. At a minimum, I recommend reading Katie Hafner’s Wired article and then her most excellent book! Oh, and to hear about other ways the 60s counterculture helped to shape the burgeoning technology industry, check out What the Dormouse Said by John Markoff. The WELL comes up in nearly every book as one of the early commercial digital communities. It’s been written about in Wired and The Atlantic, and makes appearances in books like Broad Band by Claire Evans and The Internet: A Historical Encyclopedia. The business models out there to build and run and grow a company have seemingly been reduced to a select few. Practically every online community has become free, with advertising and data being the currency we parlay in exchange for a sense of engagement with others. As network effects set in and billionaires are created, others own our words. They think the lifestyle business is quaint - that if you aren’t outgrowing a market segment you are shrinking. And a subscription site that charges a monthly access fee, with CGI code and a user experience that predates the UX field on the outside, might affirm that philosophy - especially since anyone can see your real name. 
But if we look deeper we see a far greater truth: that these barriers keep a small corner of cyberspace special - free from Russian troll farms and election stealing and spam bots. And without those distractions we find true engagement. We find real connections that go past the surface. We find depth. It’s not lost after all.  Thank you for being part of this little community. We are so lucky to have you. Have a great day.
3/12/2021 • 19 minutes, 9 seconds
Episode Artwork

Tesla: From Startup To... Startup...

Most early stage startups have, and so seemingly need, heroic efforts from brilliant innovators working long hours to accomplish impossible goals. Tesla certainly had plenty of these as an early stage startup and continues to - as do the other Elon Musk startups. He seems to truly understand and embrace that early stage startup world, and those around him seem to as well. As a company grows, we have to trade those sprints of heroic output for steady streams of ideas and quality. We have to put development on an assembly line. Toyota famously put the ideas of Deming and other post-World War II process experts into their production lines and reaped big rewards - becoming the top car manufacturer in the process. Not since the Ford Model T birthed the assembly line had automakers seen as large an increase in productivity. And make no mistake, technology innovation is about productivity increases. We forget this sometimes when young, innovative startups come along claiming to disrupt industries. Many of those do, backed by seemingly endless amounts of cash to get them to the next level in growth. The story of Tesla is as much about productivity in production as it is about innovative and disruptive ideas. And it is as much about a cult of personality as it is about massive valuations and quality manufacturing. The reason we’re covering Tesla in a podcast about the history of computers is that, at the heart of it, it’s a story about startup culture clashing head-on with decades-old know-how in an established industry. This happens with nearly every new company: there are new ideas, an organization is formed to support the new ideas, and as the organization grows, the innovators are forced to come to terms with the fact that they have greatly oversimplified the world. Tesla realized this. Just as PayPal had realized it before. But it took a long time to get there. The journey began much further back. Rather than start with the discovery of the battery or the electric motor, let’s start with the GM Impact. It was initially shown off at the 1990 LA Auto Show. It’s important because Alan Cocconi was able to help take some of what GM learned from the 1987 World Solar Challenge race using the Sunraycer and start putting it into a car that they could roll off the assembly lines in the thousands. They needed to do this because the California Air Resources Board, or CARB, was about to require fleets to go 2% zero-emission - powered by something other than fossil fuels - by 1998, with rates increasing every few years after that. And suddenly there was a rush to develop electric vehicles. GM may have decided that the Impact, later called the EV1, proved that the electric car just wasn’t ready for prime time, but the R&D was accelerating faster than it ever had before then. Around the same time, NuvoMedia was purchased by Gemstar-TVGuide International for $187 million. They’d made the Rocket eBook e-reader. That’s important because the co-founders of that company were Martin Eberhard, a University of Illinois Urbana-Champaign grad, and Marc Tarpenning. Alan Cocconi was able to take what he’d learned and form a new company, called AC Propulsion. He put together a talented group and they built a couple of different cars, including the tZero. Many of the ideas that went into the first Tesla car came from the tZero, and Eberhard and Tarpenning tried to get Tom Gage and Cocconi to take the tZero into production. 
The tZero was a sleek sportscar that began life powered by lead-acid batteries, could get from zero to 60 in just over four seconds, and could run for 80-100 miles. It used regenerative braking similar to what can be found in the Prius (to oversimplify it), and the car took about an hour to charge. The cars were made by hand and cost about $80,000 each. AC Propulsion had other projects, so they couldn’t focus on trying to mass produce the car. As Tesla would learn later, that takes a long time, focus, and a quality manufacturing process. While we think of Elon Musk as synonymous with Tesla Motors, it didn’t start that way. Tesla Motors was started in 2003 by Eberhard, who would serve as Tesla’s first chief executive officer (CEO), and Tarpenning, who would become the first chief financial officer (CFO), when AC Propulsion declined to take the tZero to market. Funding for the company was obtained from Elon Musk and others, but they weren’t that involved at first, other than the instigation and support. It was a small shop with a mission - to develop an electric car that could be mass produced. The good folks at AC Propulsion gave Eberhard and Tarpenning test drives in the tZero, and even agreed to license their EV Power System and reductive charging patents. And so Tesla would develop a motor and work on their own power train so as not to rely on the patents from AC Propulsion over time. But the opening Eberhard saw was in those batteries. The idea was to power a car with battery packs made of lithium ion cells, similar to those used in laptops and of course the Rocket eBooks that NuvoMedia had made before they sold the company. They would need funding, though. So Gage was kind enough to put them in touch with a guy who’d just made a boatload of money and had also recommended commercializing the car - Elon Musk. This guy Musk, he’d started a space company in 2002. Not many people do that. And they’d been trying to buy ICBMs in Russia and recruiting rocket scientists. Wild. But hey, everyone used PayPal, where he’d made his money. So cool. Especially since Eberhard and Tarpenning had their own successful exit. Musk signed on to provide $6.5 million in the Tesla Series A and they brought in another $1 million to bring it to $7.5 million. Musk became the chairman of the board, and they expanded to include Ian Wright during the fundraising and J.B. Straubel in 2004. Those five are considered the founding team of Tesla. They got to work building up a team to build a high-end electric sports car. Why? Because that’s one part of the Secret Tesla Motors Master Plan - the title of a blog post Musk wrote in 2006. You see, they were going to build a high-end, hundred thousand dollar plus car. But the goal was to develop mass market electric vehicles that anyone could afford. They unveiled the prototype in 2006, selling out the first hundred in three weeks. Meanwhile, Elon Musk’s cousins, Peter and Lyndon Rive, started a company called SolarCity in 2006, which Musk also funded. It merged with Tesla in 2016 to provide solar roofs and other solar options for Tesla cars and charging stations. SolarCity, as with Tesla, was able to capitalize on government subsidies, growing to become third in home solar installations with just a little over 6 percent of the market share. But we’re still in 2006. You see, they won a bunch of awards and got a lot of attention - now it was time to switch to general production. 
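That battery idea is worth a quick back-of-the-envelope check. The numbers below are ballpark assumptions - a widely cited cell count for the original Roadster pack and typical laptop-class 18650 specs of the era - not Tesla's actual engineering figures, but they show how thousands of commodity laptop cells add up to a car:

# Rough, illustrative arithmetic only - assumed figures, not Tesla's actual specs.
cells = 6831                 # widely cited 18650 cell count for the original Roadster pack
nominal_voltage = 3.7        # volts, typical lithium ion cell
capacity_ah = 2.2            # amp-hours, laptop-class 18650 of the era

wh_per_cell = nominal_voltage * capacity_ah           # roughly 8 Wh per cell
pack_kwh = cells * wh_per_cell / 1000                 # roughly 56 kWh for the whole pack

consumption_wh_per_mile = 250                         # assumed average for a small EV
estimated_range = pack_kwh * 1000 / consumption_wh_per_mile

print(f"Pack energy: ~{pack_kwh:.0f} kWh, rough range: ~{estimated_range:.0f} miles")
# Prints roughly 56 kWh and 222 miles - in the same ballpark as the Roadster's quoted 245-mile range.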
They worked with Lotus, a maker of beautiful cars that make up for issues with production quality with status, beauty, and luxury. They started with the Lotus Elise, increased the wheelbase, and bolstered the chassis so it could hold the weight of the batteries. And they used a carbon fiber composite for the body to bring the weight back down. The process was slower than it seems anyone thought it would be. Everyone was working long hours, and they were burning through cash. By 2007, Eberhard stepped down as CEO. Michael Marks came in to run the company, and later that year Ze’ev Drori was made CEO - he has been given credit by many for tightening things up so they could get to the point that they could ship the Roadster. Tarpenning left in 2008. As did others, but the brain drain didn’t seem all that bad, as they were able to ship their first car in 2008, after ten engineering prototypes. The Roadster finally shipped in 2008, with the first car going to Musk. It could go 245 miles per charge. Zero to 60 in less than 4 seconds. A sleek design language. But it was over $100,000. They were an inspiration and there was a buzz everywhere. The showmanship of Musk, paired with the beautiful cars and the elites that bought them, drew a lot of attention. As did the $1 million in profit they earned in July of 2009, off 109 cars shipped. But again, burning through cash. They sold 10% of the company to Daimler AG and took a $465 million loan from the US Department of Energy. They were now almost too big to fail. They hit 1,000 cars sold in early 2010. They opened up to orders in Canada. They were growing. But they were still burning through cash. It was time to raise some serious capital. So Elon Musk took over as CEO, cut a quarter of the staff, and Tesla filed for an IPO in 2010, raising over $200 million. But there was something special in that S-1 (as there often is when a company opens the books to go public): they would cease production of the Roadster, making way for the next big product. Tesla cancelled the Roadster in 2012. By then they’d sold just shy of 2,500 Roadsters and had been thinking through and developing the next thing, which they’d shown a prototype of in 2011. The Model S started at $76,000 and went into production in 2012. It could go 300 miles, was a beautiful car, and came with a flashy tablet-inspired 17-inch display on the inside to replace buttons. It was like driving an iPad. Every time I’ve seen another GPS since using the one in a Model S, I feel like I’ve gotten in a time machine and gone back a decade. But it had been announced in 2007 to ship in 2009. And then the ship date dropped back to 2011 and 2012. Let’s call that optimism and scope creep. But Tesla has always eventually gotten there. Even if the price goes up. Such is the lifecycle of all technology. More features, more cost. There are multiple embedded Ubuntu operating systems controlling various parts of the car, connected on a network in the car. It’s a modern marvel, and Tesla was rewarded with tons of awards and, well, sales. Charging a car that runs on batteries is a thing. So Tesla released the Superchargers in 2012, shipping 7 that year and growing slowly until now shipping over 2,500 per quarter. Musk took some hits because it took longer than anticipated to ship them, then to increase production, then to add solar. But at this point, many are solar and I keep seeing panels popping up above the cars to provide shade and offset other forms of powering the chargers. 
The more ubiquitous chargers become, the more accepting people will be of the cars. Tesla needed to produce products faster. The Nevada Gigafactory was begun in 2013, to mass produce battery packs and components. Here’s one of the many reasons for the high-flying valuation Tesla enjoys: it would take dozens if not a hundred factories like this to transition to sustainable energy sources. But it started with a co-investment between Tesla and Panasonic, with the two dumping billions into building a truly modern factory that’s now pumping out close to the goal set back in 2014. As need increased, Gigafactories started to crop up, with Gigafactory 5 being built to supposedly go into production in 2021 to build the Semi, Cybertruck (which should begin production in 2021), and Model Y. Musk first mentioned the truck in 2012 and projected a 2018 or 2019 start time for production. Close enough. Another aspect of all that software is that the cars can get updates over the air. Tesla released Autopilot in 2014. Similar to other attempts to slowly push towards self-driving cars, Autopilot requires the driver to stay alert, but can take on a lot of the driving - staying within the lines on the freeway, parking itself, traffic-aware cruise control, and navigation. But it’s still the early days for self-driving cars, and while we may think that because the number of transistors in an integrated circuit doubles every couple of years it paves the way to pretty much anything, no machine learning project I’ve ever seen has gone as fast as we want, because it takes years to build the appropriate algorithms and then rethink industries based on the impact of those. But Tesla, Google through Waymo, and many others have been working on it for a long time (hundreds of years in startup-land) and it continues to evolve. By 2015, Tesla had sold over 100,000 cars in the life of the company. They released the Model X that year, 2015. This was their first chance to harness the power of the platform - which in the auto industry is when there are multiple cars of similar size and build. Franz von Holzhausen designed it and it is a beautiful car, with falcon-wing doors, up to a 370-mile range on the battery, and again with the Autopilot. But harnessing the power of the platform was a challenge. You see, with a platform of cars you want most of the parts to be shared - the differences are often mostly cosmetic. But the Model X shared a little less than a third of the parts of the Model S. It’s yet another technological marvel, with All Wheel Drive as an option, that beautiful screen, and, check this out, a towing capacity of 5,000 pounds - for an electric automobile! By the end of 2016, they’d sold over 25,000. To a larger automaker that might seem like nothing, but they’d sell over 10,000 in every quarter after that. And it would also become the platform for a mini-bus. Because why not. So they’d gone lateral in the secret plan, but it was time to get back at it. This is where the Model 3 comes in. The Model 3 was released in 2017 and is now the best-selling electric car in the history of the electric car. The Model 3 was first shown off in 2016 and within a week, Tesla had taken over 300,000 reservations. Everyone I talked to seemed to want in on an electric car that came in at $35,000. This was the secret plan. That $35,000 model wouldn’t be available until 2019, but they started cranking them out. 
Production was a challenge, with Musk famously claiming Tesla was in “Production Hell” and sleeping on an air mattress at the factory to oversee the many bottlenecks that came. Musk thought they could introduce more robotics than they could, and so they slowly increased production to first a few hundred per week, then a few thousand, until finally almost hitting that half a million mark in 2020. This required buying Grohmann Engineering in 2017, now called Tesla Advanced Automation Germany - pumping billions into production. But Tesla added the Model Y in 2020, launching a crossover on the Model 3 platform, producing over 450,000 of them. And then of course they decided to do the Tesla Semi, selling for between $150,000 and $200,000. And what’s better than a Supercharger to charge those things? A Megacharger. As is often the case with ambitious projects at Tesla, it didn’t ship in 2020 as projected but is now supposed to ship, um, later. Tesla also changed their name from Tesla Motors to Tesla, Inc. And if you check out their website today, solar roofs and solar panels share the top bar with the Models S, 3, X, and Y. SolarCity and batteries, right? Big money brings big attention. Some good. Some bad. Some warranted. Some not. Musk’s online and sometimes nerd-rockstar persona was one of the most valuable assets at Tesla - at least in the fundraising, stock-pumping popularity contest that is the startup world. But on August 7, 2018, he tweeted “Am considering taking Tesla private at $420. Funding secured.” The SEC would sue him for that, causing him to step down as chairman for a time and limit his Twitter account. But hey, the stock jumped up for a bit. But Tesla kept keeping on, slowly improving things, and finally hit about the half million cars per year mark in 2020. Producing cars has been about quality for a long time. And it needs to be, with people zipping around as fast as we drive - especially on modern freeways. Small batches of cars are fairly straightforward. Although I could never build one. The electric car is good for the environment, but the cost to offset carbon for Tesla is still far greater than, I don’t know, making a home more energy efficient. But the improvements in the technology continue to come rapidly with all this money and focus being put on them. And the innovative designs that Tesla has deployed have inspired others, which often coincides with the rethinking of entire industries. But there are tons of other reasons to want electric cars. The average automobile manufactured these days has about 30,000 parts. Teslas have less than a third of that. One hopes that will someday be seen in faster and higher quality production. They managed to go from producing just over 18,000 cars in 2015 to over 26,000 in 2016 to over 50,000 in 2017 to the 190,000s in 2018 and 2019 to a whopping 293,000 in 2020. But they sold nearly 500,000 cars in 2020 and seem to be growing at a fantastic clip. Here’s the thing, though. Ford exceeded half a million cars in 1916. It took Henry Ford from 1901 to 1911 to get to producing 34,000 cars a year, but only 5 more years to hit half a million. I read a lot of good and a lot of bad things about Tesla. Ford currently has a little over a 46 and a half billion dollar market cap. Tesla’s crested at nearly $850 billion and has since dropped to just shy of $600 billion. Around 64 million cars are sold each year. Volkswagen is the top, followed by Toyota. Combined, they are worth less than Tesla on paper despite selling over 20 times the number of cars. 
If Tesla were moving faster, that might make more sense. But here’s the thing: Tesla is about to get besieged by competitors on every side. Nearly every category of car has an electric alternative, with Audi, BMW, Volvo, and Mercedes releasing cars at the higher ends and on multiple platforms. Other manufacturers are releasing cars to compete with the upper and lower tiers of each model Tesla has made available. And miniature cars, scooters, bikes, air taxis, and other modes of transportation are causing us to rethink the car. And multi-tenancy of automobiles through ride sharing apps, and the potential impact self-driving cars can have on that, are causing us to rethink automobile ownership. All of this will lead some to rethink that valuation Tesla enjoyed. But watching the moves Tesla makes, and scratching my head over some, certainly makes me think never to under- or over-estimate Tesla or Musk. I don’t want anything to do with Tesla stock. Far too weird for me to grok. But I do wish them the best. I highly doubt the state of electric vehicles and the coming generational shifts in transportation in general would be where they are today if Tesla hadn’t done all the good and bad that they’ve done. They deserve a place in the history books when we start looking back at the massive shifts to come. In the meantime, I’ll just call this episode part 1 and wait to see if Tesla matches Ford production levels someday, crashes and burns, gets acquired by another company, or, who knows, packs up and heads to Mars. 
3/9/2021 • 29 minutes, 19 seconds
Episode Artwork

PayPal Was Just The Beginning

We can look around at distributed banking, crypto-currencies, Special Purpose Acquisition Companies, and so many other innovative business strategies and see them as new and exciting. And they are. But paving the way for them was simplifying online payments into what I’ve heard Elon Musk call just some rows in a database. Peter Thiel, Max Levchin, and former Netscaper Luke Nosek had this idea in 1998. Levchin and Nosek had worked together on a startup called SponsorNet New Media while at the University of Illinois Urbana-Champaign, where PLATO and Mosaic had come out of. SponsorNet was supposed to sell online banner ads but would instead be one of four failed startups before they zeroed in on this new thing, where they would enable digital payments for businesses and make it simple for consumers to buy things online. They called the company Confinity and set up shop in beautiful Mountain View, California. It was an era when a number of organizations taking payments online were doing things that weren’t so great. Companies would cache credit card numbers on sites, many had weak security, and the rush to sell everything in the bubble forming around dot-coms fueled a knack for speed over security, privacy, or even reliability. Confinity would store the private information in its own banking vaults, keep it secure, and provide access to vendors - taking a small charge per transaction. Where large companies had been able to build systems to take online payments, now small businesses and emerging online stores could compete with the big boys. Thiel and Levchin had hit on something when they launched a service called PayPal, to provide a digital wallet and enable online transactions. They even accepted venture funding, taking $3 million from banks like Deutsche Bank, beamed over Palm Pilots. One of those funders was Nokia, investing in PayPal’s expansion into digital services for the growing mobile commerce market. And by 2000 they were up to 1,000,000 users. They saw an opening for making a purchase from a browser or an app on a cell phone, using one of those new smartphone ideas. And they were all rewarded with over 10 million people using the site in just three short years, processing a whopping $3 billion in transactions. Now this was the heart of the dot-com bubble. In that time, Elon Musk managed to sell his early startup Zip2, which made city guides on the early internet, to Compaq for around $300 million, pocketing $22 million for himself. He parlayed that payday into X.com, another online payment company. X.com exploded to over 200,000 customers quickly, and as happens frequently with rapid acceleration, a young Musk found himself with a new boss - Bill Harris, the former CEO of Intuit. And they helped invent many of the ways we do business online at that time. One of my favorites of Levchin’s contributions to computing, the Gausebeck-Levchin test, is one of the earliest implementations of what we now call a CAPTCHA - you know, when you’re shown a series of letters and asked to type them in to eliminate bots. Harris helped the investors de-risk by merging X.com with Confinity. Peter Thiel and Elon Musk are larger than life minds in Silicon Valley, and the two were substantially different. Musk took on the CEO role, but Musk and Thiel were at odds. Thiel believed in a Linux ecosystem and Musk believed in a Windows ecosystem. Thiel wanted to focus on money transfers, similar to the PayPal of today. 
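To make that "rows in a database" idea concrete, here is a minimal, hypothetical sketch of a money transfer as ledger rows - nothing like PayPal's real system, which also has to handle fraud, reversals, currencies, and regulation, but it shows how small the core really is:

import sqlite3

# A toy ledger: a transfer is just two rows written inside one atomic transaction.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ledger (account TEXT, amount_cents INTEGER, memo TEXT)")

def transfer(sender: str, receiver: str, cents: int, memo: str) -> None:
    with db:  # commits both rows together, or neither on error
        db.execute("INSERT INTO ledger VALUES (?, ?, ?)", (sender, -cents, memo))
        db.execute("INSERT INTO ledger VALUES (?, ?, ?)", (receiver, cents, memo))

def balance(account: str) -> int:
    row = db.execute(
        "SELECT COALESCE(SUM(amount_cents), 0) FROM ledger WHERE account = ?",
        (account,),
    ).fetchone()
    return row[0]

transfer("buyer@example.com", "seller@example.com", 300, "Golden Girls t-shirt")
print(balance("seller@example.com"))  # 300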
Given that those were just rows in a database, it was natural that that kind of business would become a red ocean, and indeed today there are dozens of organizations focused on it. But PayPal remains the largest. So Musk also wanted to become a full online banking system - much more ambitious. Ultimately Thiel won and assumed the title of CEO. They remained a money transmitter and not a full bank. This means they keep funds that have been sent and not picked up in an interest-bearing account at a bank. They renamed the company to PayPal in 2001 and focused on taking the company public, with an IPO as PYPL in 2002. The stock shot up 50% in the first day of trading, closing at $20 per share. Yet another example of the survivors of the dot com bubble increasing the magnitude of valuations. By then, most eBay transactions accepted PayPal, and seeing an opportunity, eBay acquired PayPal for $1.5 billion later in 2002. Suddenly PayPal was the default option for closed auctions and would continue their meteoric rise. Under eBay, PayPal would grow and, as with most companies that IPO, see a red ocean form in their space. But they brought in people like Ken Howery, who served as the VP of corporate development, would later cofound investment firm Founders Fund with Thiel, and then become the US Ambassador to Sweden under Trump. And he’s the first of what’s called the PayPal Mafia, a couple dozen extremely influential personalities in tech. By 2003, PayPal had become the largest payment processor for gambling websites. Yet they walked away from that business to avoid some of the complicated regulations, until various countries could verify a license for online gambling venues. In 2006 they added security keys and moved to sending codes to phones for a second factor of security validation. In 2008 they bought Fraud Sciences, to gain access to better online risk management tools, and Bill Me Later. As the company grew, they set up a company in the UK and began doing business internationally. They moved their EU presence to Luxembourg in 2007. They’ve often found themselves embroiled in politics, blocking political financing accounts, the Alex Jones show InfoWars, and, one of the more challenging cases for them, WikiLeaks in 2010. This led to them being attacked by members of Anonymous with a series of denial-of-service attacks that brought the PayPal site down. OK, so that early CAPTCHA was just one way PayPal was keeping us secure. It turns out that moving money is complicated, even the $3 you paid for that special Golden Girls t-shirt you bought for a steal on eBay. For example, US states require reporting certain transactions, some countries require actual government approval to move money internationally, and some require a data center in the country, like Turkey. So on a case-by-case basis PayPal has had to decide if it’s worth it to increase the complexity of the code and spend precious development cycles to support a given country. In some cases, they can step in and, for example, connect the Baidu wallet to PayPal merchants in support of connecting China to PayPal. They were spun back out of eBay in 2014 and acquired Xoom for $1 billion in 2015 and iZettle, who also does point-of-sale systems, for $2.2 billion. And surprisingly they bought online coupon aggregator Honey for $4B in 2019. 
But their best acquisition to many would be tiny app payment processor Venmo, for $26 million. I say this because a friend claimed they prefer that to PayPal because they like the “little guy.” Out of nowhere, just a little more than 20 years ago, the founders of PayPal and a number of their initial employees willed a now Fortune 500 company into existence. While they were growing, they had to learn about and understand so many capital markets and regulations. This sometimes showed them how they could better invest money. And many of those early employees went on to have substantial impacts in technology. That brain drain helped fuel the Web 2.0 companies that rose. One of the most substantial ways was with their investment activities. Thiel would go on to put $10 million of his money into Clarium Capital Management, a hedge fund, and into Palantir, a big data AI company with a focus on the intelligence industry, which now has a $45 billion market cap. And he funded another organization that doesn’t at all use our big private data for anything, called Facebook. He put half a million into Facebook as an angel investor - an investment that has paid back billions. He also launched the Founders Fund and Valar Ventures, and is a partner at Y Combinator, in capacities where he’s funded everyone from LinkedIn and Airbnb to Stripe to Yelp to Spotify to SpaceX to Asana, and the list goes on.