The Interface Is the Message (when you know the rules)

What chess streamed on Twitch, AI, and seed-stage startup investments have in common

Chess grandmaster Hikaru Nakamura and chess fan Shahar Tzafrir at a Paris chess event. 2016.

A couple of days ago, the New York Times published an article about the soaring popularity of chess on Twitch, giving Grandmaster Hikaru Nakamura due credit.

Of all things, this made me revisit past thoughts on how chess has helped me understand the critical importance of the interface between AI and knowledge-workers, and how chess has led me to make multiple seed-stage investments in AI-driven startups.

First, what’s Twitch? It’s an online streaming platform created for e-sport players to stream as they play. Amazon acquired Twitch in 2014 (after an outstanding Bessemer investment two years earlier) for $970M in cash. Twitch has since soared to become the 31st most visited global website according to Alexa and passed $1.5B in annual revenue.

The most popular personalities on Twitch are players competing in multiplayer e-sport games, with tens (sometimes hundreds) of thousands of live viewers watching their every move. Twitch's current record is 6.1 million people watching the PlayStation 5 announcement live (June 2020) and 2.3 million people watching a live e-gaming event (Fortnite, of course). The five most popular players on Twitch jointly have more than 45,000,000 followers. Leading Twitch streamers report they make north of $100K a month — by playing games.

One thing you could hardly find on Twitch until recently was chess players!

And then came Hikaru.

Hikaru Nakamura is a 32-year-old chess prodigy who became a chess grandmaster at the age of 15. For context on how hard it is to become a chess grandmaster: fewer than 2,000 grandmaster titles have ever been awarded. Professional chess players are rated via the Elo rating system, much as in other sports. Only 118 chess players have ever managed to cross the 2,700 rating threshold, gaining the unofficial title of Super-Grandmaster. Fewer still — only 14 people in history — have managed to cross the 2,800 rating threshold.

Hikaru Nakamura is on this truly short (and distinguished) list of 14 chess players who have crossed the 2,800 rating. He is also a five-time U.S. chess champion and the current world number-one-rated blitz player (3 minutes per game).

Hikaru started showing up on Twitch as the coronavirus led to cancellations of in-person chess events. It turns out his knack for reaching the top echelons is not limited to chess alone. As of last month, Hikaru had more than 500,000 followers and more than 25,000,000 views of his chess content on Twitch, making him the #1 most-watched player in North America. His focus is on making chess fun. He is friendly, plays beginners, and offers positive advice. He entertains his viewers with feats of mind that are hard to comprehend: playing with his eyes closed (at fast time controls) or solving timed chess puzzles — something you've got to see to believe possible. Kudos to Hikaru!

Watch: https://www.youtube.com/watch?v=fMrlyZdUNLQ

I started this post by saying how this reminded me of a key insight I gained from chess.

Chess was one of the first playing grounds for AI scientists. Many AI scientists happened to be chess fans, and chess proved a fertile battlefield on which to evaluate their early advances in AI. *More on why chess is an easy problem compared to real AI at the end.

The chess community, professional players and amateurs alike, quickly adopted chess AI, even when that AI was still relatively weak (long before the Kasparov vs. Deep Blue matches of 1996 and 1997). I know, because I was part of that scene at the time.

I venture to claim that chess players (are they not the ultimate knowledge-workers?) were, among all occupations, one of the first to augment their profession with AI.

Why? Two reasons. A) chess players love to win. They’ll take any advantage available; B) more profoundly, the interface was there. By interface, I mean the person/machine interface that easily allowed chess players to benefit from an AI.

A chess AI from 20 years ago (and, in fact, of today still) would look at a chess position and display a 'score' — its evaluation of the current board position. It would also present the most likely upcoming sequence of moves, in a notation that chess players were already using.

Instead of a player debating a position with themselves or with another player, chess AI used the exact language, process, and interface that chess players were already using, and provided an immediate and better answer.

Chess AI quickly became stronger than most (and then all) human players. Technically speaking, it started by using a banal brute force method called minimax that required significant computing resources and very little smarts. However, this approach proved enough to outplay all human players.
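To make the brute-force idea concrete, here is a minimal, illustrative minimax sketch in Python. It uses a toy game (Nim: take 1-3 stones, whoever takes the last stone wins) rather than chess, which is far too heavy for a sketch, but the principle is identical: exhaustively search every move sequence and report a score plus the expected line of play — exactly the score-and-variation interface described above.

```python
# Minimal minimax sketch on a toy game (Nim: take 1-3 stones; the player
# who takes the last stone wins). Illustrative only -- a real chess engine
# applies the same exhaustive idea to board positions, plus optimizations.

def minimax(stones, maximizing):
    """Return (score, principal_variation) for the current position.

    score is +1 if the maximizing player wins with best play, -1 otherwise.
    principal_variation is the expected sequence of moves (stones taken).
    """
    if stones == 0:
        # The player who just moved took the last stone and won,
        # so the player whose turn it is now has lost.
        return (-1 if maximizing else 1), []

    best_score, best_line = None, []
    for take in (1, 2, 3):
        if take > stones:
            break
        score, line = minimax(stones - take, not maximizing)
        if best_score is None or (maximizing and score > best_score) \
                or (not maximizing and score < best_score):
            best_score, best_line = score, [take] + line
    return best_score, best_line

score, variation = minimax(7, maximizing=True)
print(score, variation)  # score 1: the first player wins, starting by taking 3
```

Like an engine's output, the result is just a number and a line of moves — no explanation needed, and none missed.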

AI research continued evolving, reaching today's most cutting-edge science: a self-learning, unsupervised, reward-based, reinforcement-trained neural network (yes, these are five separate hyperlinks). It is most notably exemplified by DeepMind's AlphaZero, the open-source Lc0 project attempting to recreate it, and Stockfish, which amalgamates the traditional minimax alpha-beta-pruning search tree algorithm (only four hyperlinks here — it's easy to see which one is harder) with a neural-network-based evaluation.
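For the curious, the alpha-beta pruning mentioned above can be sketched in a few lines. This is a hedged toy version operating on a hand-built tree with made-up leaf evaluations, not on real chess positions: it returns the same value plain minimax would, while skipping branches that provably cannot change the result.

```python
# Minimal alpha-beta pruning sketch on a hand-built game tree.
# Leaves hold static evaluations; internal nodes alternate max/min turns.
# Illustrative only: real engines generate moves from a position instead.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):       # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # remaining siblings can't matter,
                break                        # so prune them without searching
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A depth-2 tree: the max player picks a branch, then min replies.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
```

In the last branch, once the leaf `1` is seen the search stops: no reply there can beat the `6` already guaranteed — that is the entire trick.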

While chess AI was making massive progress in playing strength (the race did not stop after it became superhuman; researchers always aspired for more), the one thing that did not change was the interface chess players used to interact with their chess-playing AI.

That allowed chess players to remain utterly oblivious to how their AI was doing its magic. They could benefit from its increased strength without knowing anything about how it works, and without having to learn new ways to practice their chess. The interface stayed the same.

Surprisingly, AI failed to prove useful almost everywhere outside of chess for more than two decades. The initial brute-force approach that worked so well in chess proved less applicable in other, more dynamic domains. It should be said that many, myself included, hesitate even to call the brute-force minimax approach used to play chess "AI." It played chess (at a superhuman level), but it remains quite dumb in its inability to apply that knowledge anywhere else or adapt to any change in the rules.

In my view, however, AI's failure outside of chess was not due to core weakness. It had more to do with not finding that magical, natural interface between AI's insights and the knowledge-employees who needed to consume them. While AI was, and remains, limited in its abilities (in varying scenarios), surely chess was not the only profession that could be augmented with AI. Moreover, AI doesn't need to be the best for it to become useful; the challenge is applying it where it is strong. Finding a way that allows a knowledge-worker to interact with an AI's output naturally, and use its insights without requiring any change in the way they currently work, was critical for AI's success. And it was an avenue hardly being explored at all. * A side topic of the interface question is how to gracefully handle scenarios where the AI gets things wrong, but that's for another time.

A counterargument I received is that AI's failure was not for lack of interface, but rather because of its weakness and inability to adapt. Most AI models of that era had a "built-in" interface providing explainability — that is, an easy way for the AI to explain how it reached a particular conclusion, e.g., based on what data and which assumptions. That explainability was supposed to increase users' trust in the AI's results. It stands in stark contrast to today's opaque deep-learning neural-network black boxes, which we must implicitly trust without ever fully understanding how they reached a particular conclusion.

I disagree with this argument. Explainability of the previous generations of AI was aimed more towards the developers of that AI and contributed little towards a seamless augmentation of anyone using it. Further, from personal experience — I do not need to understand why a chess AI says a particular board position is hopeless once I trust it is always correct. * It’s easy to see I’m highly doubtful of the current explainability trend in AI outside academia. I see no practical value in this direction.

I do agree that brute-force based AIs were weaker and less adaptive than current neural-network-based AI. This severely limited their usefulness and the domains for which they were relevant. However, my counter-argument to this claim is that most users (same as most chess players) do not need a “2,800” strong AI. Above some arbitrary strength threshold, all AIs feel equally “superhuman” strong to us, mere mortal humans.

Users (e.g., knowledge-employees) require an AI that seamlessly interfaces and augments how they work today. They do not need it to be the “world champion” of AI to agree to use it or for it to provide them substantial value. What they do need is for it to help them become more efficient, make smarter and faster decisions, offload mental tasks that can be addressed by the AI without their supervision, thereby clearing more thinking time for themselves. They don’t need a tool that forces upon them new ways of doing what they do today. Quoting Elad Walach, co-founder and CEO of Aidoc: “A key interface challenge is how to pose the right question to the AI. If you ask ‘the right question,’ you could get useful answers.”

As I started my career in venture capital in 2013 (seven years ago), the first signs of 'deep learning' as a breakthrough approach to AI began surfacing. AI was slowly coming out of the basement where computer science departments had relegated their orphaned, disappointing AI children during the long AI winter.

Deep learning (along with dramatically increased compute capacity and access to more data) was beginning to change that.

There was one history-in-the-making moment in 2015. That's when DeepMind (which I consider the best technology acquisition ever made, closely followed by Google's acquisition of Android and Facebook's acquisitions of Instagram and WhatsApp) demonstrated its self-taught deep-learning neural-network agent playing old Atari video games. Self-taught, it gained superhuman ability across multiple games, at a speed that made the ground shake. I am sure that anyone who saw this demo as I did had to A) pick their jaw up off the floor, and B) start thinking about the impact this would have on other domains.

Entrepreneurs are always the first to identify significant technology breakthroughs that enable new incredible value creation possibilities and aim to launch startups based on that value.

As I started meeting startups adopting AI as their core technology stack, I was helped by my earlier thesis of “The Interface is the Message.” I trust Marshall McLuhan would excuse me here.

I spent much of my time searching for startups with the thesis of augmenting knowledge-workers' abilities without changing their current ways of working — the processes, methods, and actual interfaces they were using to get the job done. I was looking for startups whose founders had the same strong innate sense that the interface is the message. Quoting Dr. Noam Solomon's observation: "Startups that understood the rules of the game and how to interface with the players, but in order to be best-in-class wanted to employ super-human capabilities to augment them." It was the same natural interface that AI offered chess players two decades earlier (requiring no changes to their process), which led to virtually 100% adoption of AI among professional chess players.

For me, the magic moment of deciding to invest the first seed money is a combination of a team I would sign up to work for if I were good enough for them to hire me (i.e., utterly blown away by them as people and as leaders), coupled with a thesis that has long been keeping me excited, applied to a sufficiently large market that either exists or better yet, one that we believe should exist.

Armed with this thesis, I was fortunate to meet and lead the seed investment rounds in:

- Applitools (2014), augmenting software quality assurance with visual testing, using computer vision.
- Syte (2016), applying computer vision to fashion identification, driving new revenue opportunities from online traffic.
- Aidoc (2016), augmenting radiologists' workflow in time-critical situations, saving patients' lives.
- Buildots (2018), using computer vision in the least tech-savvy domain of them all: construction. Their product saves massive costs and time during complex construction projects through early detection of mistakes (versus the 3D plans of the site) and better tracking and coordination across the multi-vendor efforts typical of construction sites.
- Granulate (2018), which uses AI for real-time continuous optimization of its clients' workloads, achieving dramatic cost reductions and latency improvements in their cloud environments — with zero code changes and zero effort required from clients, thanks to an AI that analyzes what the ideal infrastructure should be and adapts it accordingly.
- Immunai (2019), which combines not one but two breakthroughs (single-cell sequencing technologies, and transfer and multi-task learning in AI) to map and decode the most complex system of them all, the human immune system, for better therapeutics.
- Deepcure (2019), applying AI to the broken drug-discovery process.
- And others (still under the radar).

Starting at day zero, all these founders discussed the importance of finding ways to augment and naturally integrate into their clients' current ways of working, while the complex AI stack they were developing operated hidden behind the scenes, like a Secret Santa. If you like: the Wizard-of-Oz experiment, in reverse. It's about the value to the knowledge-worker, who couldn't care less whether it's AI or a bunch of trained monkeys doing the work for them.

Because… The Interface is The Message.

Closing the circle back to chess: seeing the game draw vast crowds on Twitch through grey matter alone, with no AI involved — only the mental powers and attitude of a chess genius like GM Hikaru Nakamura — is a good omen for Caïssa and my beloved game of chess.

PS. My thanks to Eliran Rubin, who suggested this post after finding chess on Twitch. To Dr. Noam Solomon, Elad Walach, Roy Danon, Asaf Ezra, Ohad Maislish, Miki Shifman, and Dr. Micha Y. Breakstone, all for taking time from their incredible startups to review this post and provide well-thought-out and often very opposing views. To Yonatan Mandelbaum and Brian Sack for brainstorming and their helpful suggestions.

PPS. An observation by Dr. Noam Solomon, co-founder and CEO of Immunai, I felt I must quote verbatim:

“The reason why chess is easy for ML is that you have a simple set of rules with a finite set of board arrangements and a well-defined, deterministic way to go from one arrangement to another. There is a clear definition of winner/loser and outcome, and “all you need to do” is run all possibilities on a large enough computer. That immediately guarantees that with enough computing power (and basic algorithms), you can solve the problem. But if you take other problems, like the immune system — this is not a well-defined game with clear board arrangements, inputs, outputs, and outcomes. There, you have many hypotheses to make and nobody can really say what the rules of the game are. This is exactly why it is called “artificial intelligence” because you need to create a truly new way to generate intelligence towards the problem.”

Dr. Noam Solomon’s observation of having to learn the rules while you play reminded me of the beautiful chessboard analogy of Nobel laureate Dr. Richard Feynman on understanding nature’s laws:

One way that’s kind of a fun analogy to try to get some idea of what we’re doing here to try to understand nature is to imagine that the gods are playing some great game like chess. Let’s say a chess game. And you don’t know the rules of the game, but you’re allowed to look at the board from time to time, in a little corner, perhaps. And from these observations, you try to figure out what the rules are of the game, what [are] the rules of the pieces moving.

You might discover after a bit, for example, that when there's only one bishop around on the board, that the bishop maintains its color. Later on you might discover the law for the bishop is that it moves on a diagonal, which would explain the law that you understood before, that it maintains its color. And that would be analogous to the way we discover one law and later find a deeper understanding of it.

Ah, then things can happen — everything’s going good, you’ve got all the laws, it looks very good — and then all of a sudden some strange phenomenon occurs in some corner, so you begin to investigate that, to look for it. It’s castling — something you didn’t expect.

We're always, by the way, in fundamental physics, always trying to investigate those things in which we don't understand the conclusions. We're not trying to all the time check our conclusions; after we've checked them enough, they're okay. The thing that doesn't fit is the thing that's most interesting — the part that doesn't go according to what you'd expect.