The Future of Software
Custom, on demand, built instantly by AI, with no development skills necessary.
In the future, software will be written entirely by AI. It will be built on demand, customized for every use case, and accessible to everyone, with no coding skills necessary. I slowly became convinced of this while building out the software and development team at my first startup, Canvas, founded in 2015 to build vision-based autonomous mobile robots and acquired by Amazon in 2019.
The software powering the neurosymbolic AI driving our robots was complex and difficult to write. The robots used cameras as their primary means of observing the world, building world models, and navigating challenging dynamic environments where there were few explicit rules. Although our software was expensive to write and maintain, this wasn’t a problem exclusive to us. I saw it repeatedly at Amazon after Canvas was acquired, and at many startups as a venture partner at Xplorer Capital: the pace of software development and maintenance was one of the primary bottlenecks for growth.
Software today is built on relatively ancient paradigms. The average age of the top 3 most used languages on GitHub is 29.6 years. And yet, software products continue to become more complex, more demanding to maintain, and more in need of frequent security and feature updates. So why are we still using 30-year-old programming languages? And why isn’t there anything better? The only answer I found convincing is that to do better, software must be written entirely by AI; to succeed, the AI needs to understand human intent, and in turn, humans must trust its output.
Moving beyond code in this way will be a major transformation for software development, but not one without parallels. In the 1950s, punch cards were the primary interface to computers. To program them, programmers had to painstakingly prepare voluminous decks of hole-punched cards representing program operations. Fortran, released in 1957, changed that forever. Even though the medium through which the computer understood its instructions didn’t change, the human interface did. Humans could now use a new abstraction: programming languages. The compiler, in turn, did the work of translating code down to machine instructions (the digital equivalent of the punch cards). This shift significantly improved the productivity of programmers. Although making a similar leap from code to natural language is technically much more challenging, the results will be similar: building software will become significantly easier and more productive. The question, then, is how to make the leap.
I couldn’t sleep on this problem. After some years of technical exploration and convincing old and new friends to join, Durable was founded in the summer of 2022 to solve it. Our name is inspired by the vision of a future where reliable software is written entirely from natural language specifications, deployed and maintained automatically and indefinitely, with zero coding skills required of users. The implications of achieving this are obvious. Access to custom software will no longer be limited to those with development skills. Software will no longer be mass-manufactured for greatest-common-denominator use cases. It will be custom to every individual and every use case. The only requirement will be having an idea.
If we are to move beyond code, then both the specification and verification of software must take place in natural language. Not only must the generated software be 100% one-click, deploy-ready by construction, but users will also need to trust that the AI has understood their intentions, and that the software meets their specifications, all without interfacing with the code.
Today, large language models (LLMs) are the standard method for generating code from natural language inputs. They are a type of auto-regressive deep-learning model based on the Transformer architecture first described in a seminal 2017 paper. LLMs, now popularized in the mainstream by ChatGPT, are astonishingly good at producing coherent text and poetry, and at generating snippets of code from natural language descriptions. But if we use LLMs to move beyond code, they must ask clarifying questions where appropriate, state their assumptions wherever they are made, and generate code that is not only functional and deployable, but also verifiable by users. In short, they must generate not just code, but software. These are all capabilities that aren’t naturally aligned with what LLMs are designed to do.
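To make the auto-regressive idea concrete, here is a deliberately minimal toy sketch of greedy decoding: the model predicts one token at a time, and each prediction is fed back in as input for the next step. The bigram lookup table standing in for the model is entirely hypothetical; a real LLM replaces it with a Transformer over learned embeddings, but the generation loop has the same shape.

```python
# Toy illustration of auto-regressive (greedy) decoding.
# BIGRAM_MODEL is a made-up stand-in for a real model: it maps the most
# recent token to the single most likely next token.
BIGRAM_MODEL = {
    "def": "add",
    "add": "(a,",
    "(a,": "b):",
    "b):": "return",
    "return": "a+b",
}

def generate(prompt_token: str, max_tokens: int = 10) -> list[str]:
    """Generate tokens one at a time, feeding each output back as input."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        next_token = BIGRAM_MODEL.get(tokens[-1])
        if next_token is None:  # no continuation: stop, like an end-of-sequence token
            break
        tokens.append(next_token)
    return tokens

print(" ".join(generate("def")))  # prints: def add (a, b): return a+b
```

The loop makes the limitation visible: the model only ever extends a sequence, one token at a time. Asking a clarifying question, stating an assumption, or verifying the result against the user's intent are behaviors that have to be built around this loop, not ones it provides on its own.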
Because of this, we’ve taken an approach to building Durable that differs from the status quo. It combines the impressive capabilities of custom LLMs for dealing with language with dedicated reasoning (planning) and world-modeling capabilities in the joint domain of code and language. Our neurosymbolic approach is complemented by a team with deep experience in both LLMs and symbolic AI, and an appreciation of each approach’s strengths and weaknesses. Realizing Durable’s vision is difficult and will take time, but there are concrete steps along the way: steps which are feasible in the near term and which have significant commercial potential in the right applications. I’m excited to again be building something that feels impactful with a fantastic team. I’ll be sharing regularly on this blog as we progress.