How Does the Drama at OpenAI Affect You?

November 20, 2023

In the late 1980s and early 1990s, Microsoft's operating systems stood unchallenged, wrapped in a mystique of secrecy. The company's "Fear, Uncertainty, and Doubt" marketing strategy played a crucial role, implying that Windows had exclusive, almost magical capabilities and casting doubt on alternatives. Alternatives barely even existed, because Microsoft had convinced most people that Windows was the only viable option. They assured us that the code was closed and secret for our own protection: hackers couldn't figure out how to break in if they couldn't see how it worked. But we know now that security doesn't work that way, and the secrecy was really for Microsoft's protection, not ours.

A revelation came from a college student half a world away who introduced Linux after reading the classic university textbook on operating systems. This open-source operating system peeled back the layers of secrecy. Anyone could read the Linux source code to see how it worked, challenging the perception that Windows was doing something magical under the hood. The openness revealed that Microsoft kept its operating system closed to maintain fear, uncertainty, and doubt rather than to preserve trade secrets.

Microsoft felt profound effects from losing its aura: The startup community embraced Linux, which became the basis for the cloud computing revolution. Microsoft is still trying to catch up; even Azure, its own cloud computing service, runs more Linux virtual machines than Windows ones.

OpenAI's illusory mystique

Like Microsoft in the early days of personal computing, OpenAI initially led the way in generative AI with releases like ChatGPT and GPT-4, quickly becoming the dominant force. Part of their allure was their unique governance model, in which a non-profit ostensibly guided their path, lending an aura of altruism and innovation. This mystique encouraged customers to accept a degree of vendor lock-in, bypassing the industry best practice of thoroughly evaluating AI options before committing.

And, like Microsoft, OpenAI found self-serving justifications to reverse their original "Open AI" mission and keep their work closed and secret: They were saving the world with their "capped profit" structure, and revealing their secret sauce to the world would endanger humanity.

However, the recent upheaval vaporized this mystique within one bizarre hour. Microsoft's market value tumbled by about fifty billion dollars, thanks to the OpenAI board's inept timing in announcing Sam Altman's firing before markets closed, and a deeper problem became clear: the dramatic leadership changes and board decisions exposed OpenAI's unique governance structure as a liability rather than a strength, introducing instability and unpredictability. If business continuity is not part of their mission, then they don't make good business partners.

The letter that OpenAI employees, including Mira Murati, sent to the board included a jarring accusation: "You also informed the leadership team that allowing the company to be destroyed 'would be consistent with the mission.'" This stark statement reveals the board's willingness to sacrifice the company's existence in pursuit of its broader mission, raising serious doubts about OpenAI's commitment to stability and ongoing service reliability for its users.

For OpenAI customers like Anthus, this situation creates profound uncertainty. OpenAI's mission doesn't align with the interests of any business. The board's willingness to jeopardize the future of the company's employees, partners, and customers casts doubt on the long-term viability of solutions dependent on the OpenAI API. The realization that OpenAI's mission might supersede the sustainability of its services forces a re-evaluation of our reliance on their technologies.

The 'no moats' revelation

The leaked memo "We Have No Moat, And Neither Does OpenAI," attributed to an anonymous Google researcher, presciently predicted the volatility and shifting dominance of the AI industry, emphasizing rapid advances in the open-source sector. The parallel to the earlier computing era is direct: just as Linus Torvalds' Linux demystified operating systems and challenged Microsoft's dominance, open-source AI is now reshaping generative AI.

Ironically, Microsoft, once the victim of mystique vaporization, now stands to benefit from the dissolution of OpenAI's aura. The pieces and players shift in this game, making today's underdog tomorrow's contender.

Implications for the future of AI

The Linux story teaches us that transparency, democratization, and community-driven development can lead to robust, versatile technologies. Applying these principles to AI promises a future where the field is collaborative, evolving, and accessible to all. This open ecosystem could lead to more secure, diverse, and innovative AI applications, much as Linux transformed the software industry.

How should we respond?

The OpenAI debacle was a wake-up call for everyone who depends on AI to reassess their strategies, not just whom they trust.

As an individual

Our emphasis on prioritizing solutions over tools means continuously exploring and understanding a broad range of AI technologies and approaches. It's about being agile and adaptable, ready to adopt the most effective solutions as the AI landscape evolves.

As an organization

Organizations must foster a culture of flexibility and innovation, adopting AI governance policies that mandate regular evaluation of alternative AI solutions. This approach keeps your organization adaptable and ready to embrace a constantly evolving AI landscape.

Our plan: Commodify AI services

Our initial experience with generative AI via OpenAI was exhilarating, and their API services quickly became our go-to solution. However, we inadvertently created a dependency, neglecting the importance of continually evaluating alternatives.

We're now much more deliberate about whether a specific task requires a hosted AI API or whether open-source models and tools such as Hugging Face libraries could do the job better. If we want to deliver an affordable, secure, and dependable system for business automation, such as spotting GDPR "forget me" requests in emails, we could use the OpenAI API. It would be easy, and it would work. But why pay to send thousands of requests to any third party if we could do it ourselves? Much less a third party like OpenAI that seems conflicted about its own values?
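As a concrete sketch of the open-source alternative: a zero-shot classifier from the Hugging Face transformers library can flag likely erasure requests locally, so no message ever leaves our infrastructure. The labels, confidence threshold, and model choice below are illustrative assumptions, not a production configuration:

```python
# Sketch: flag GDPR "forget me" (data-erasure) requests in emails locally,
# using an open-source zero-shot classifier instead of a third-party API.
# The labels, threshold, and model name here are illustrative assumptions.

LABELS = ["request to delete personal data", "ordinary correspondence"]

def is_erasure_request(text, classifier, threshold=0.8):
    """Return True when the classifier's top label is the erasure label
    and its confidence clears the threshold."""
    result = classifier(text, candidate_labels=LABELS)
    return result["labels"][0] == LABELS[0] and result["scores"][0] >= threshold

if __name__ == "__main__":
    # Requires `pip install transformers torch`; the model downloads on first use.
    from transformers import pipeline
    clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    email = "Under GDPR, please erase all personal data you hold about me."
    print(is_erasure_request(email, clf))
```

Classifying a few thousand emails this way runs on commodity hardware, and swapping in a different model means changing one string, which is exactly the kind of interchangeability this section argues for.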

The OpenAI debacle has solidified one of our core values: to commodify AI models and services, treating them as interchangeable tools rather than proprietary solutions. This shift towards a more independent and versatile approach is not merely about cost savings or information security: It's about staying ahead in a quickly changing game, where choosing sides among the giants is not the path to stability. In a world with no moats, don't get too invested in any given castle.

What's your plan?

What's your next move in this ever-changing game of AI?