The Musk of Modern AI

Sam Myres
6 min read · Mar 15, 2024

There’s something you should know about AI. Elon’s not telling the truth, but that’s not the point; that’s just a dependent fact. Whether he’s deceiving himself or not, I don’t have much bandwidth to care. What is obvious to me is that people are not aware of the dimensions of complexity in AI.

[Image: DALL·E’s interpretation of this blog post]

Dimensions are interesting. You can stack them until you’re dizzy, and nothing inherent in the math tells you they can’t keep overlapping, multiplying the information stored in a single point in the space. The Flatlanders from the book “Flatland” might only be able to conceive of and operate in their zipper-driven 2D world or the lower 1D one, but that doesn’t stop us up here in three dimensions from understanding both their world and ours, with that additional depth perception. And even though I don’t know how to imagine what it would look like, a fourth dimension is somewhat within grasp, given we experience time in slices of a sort, moments. Beyond that I just really struggle to imagine. A new dimension at 90 degrees from all the existing ones, somehow? Just, how?

AI has many dimensions too, not in space but in complexity. No, not space complexity. There are multiple formulations of the dimensions of AI agent complexity, and here I go through one, first describing each dimension and then relating them loosely to what I understand to be true about Tesla FSD.

Modularity, succinctness, planning horizon, uncertainty, goals/preferences, number of agents, ability to learn, and the bounds on other agents’ rationality all combine to make things like driving a car or making a pancake hard as f***.

Modularity:

If you can’t reduce your problem into modular parts, hiding the complexity of each from the view of the others, many computational problems become next to impossible, if not actually impossible, to reason about. Sometimes these modules share a single playing field; sometimes they get hierarchical.
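To make that concrete, here’s a toy Python sketch of a hierarchically modular agent. Every name and threshold is invented for illustration; it’s nobody’s real architecture, least of all Tesla’s. The point is just that each module hides its internal mess behind a narrow interface:

```python
# Hypothetical sketch of hierarchical modularity: each module hides its
# internals behind a narrow interface, so the layer above can reason
# about "what" without re-deriving "how". All values are invented.

def perceive(raw_sensor_data):
    """Perception module: raw measurements in, symbolic facts out."""
    return {"obstacle_ahead": raw_sensor_data["lidar_min_m"] < 5.0}

def plan(world_facts, goal):
    """Planning module: reasons only over symbols, never raw data."""
    if world_facts["obstacle_ahead"]:
        return "brake"
    return "cruise" if goal == "reach_waypoint" else "idle"

def act(decision):
    """Control module: turns a symbolic decision into actuator commands."""
    commands = {"brake": (0.0, 1.0), "cruise": (0.4, 0.0), "idle": (0.0, 0.0)}
    return commands[decision]  # (throttle, brake) pair

throttle, brake = act(plan(perceive({"lidar_min_m": 3.2}), "reach_waypoint"))
print(throttle, brake)  # 0.0 1.0: an obstacle was sensed, so we brake
```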

Succinctness/Expressiveness:

Compacting information into models is the name of the game in AI. How that information is encoded determines the expressiveness, meaning the range of states/information the model can represent, but whether you can compact your inputs to fit your model’s input is its own problem; see LLMs’ constant push to accept larger windows of “tokens,” or text.
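Here’s a toy illustration of that squeeze, with a context budget I made up (real tokenizers and context windows are far fancier, but the trade-off is the same): anything past the limit simply never reaches the model.

```python
# Toy sketch of the context-window squeeze. CONTEXT_BUDGET is invented;
# real limits are in the thousands-to-millions of tokens, but still finite.

CONTEXT_BUDGET = 8  # hypothetical limit, in tokens

def tokenize(text):
    return text.split()  # crude stand-in for a real tokenizer

def fit_to_context(text, budget=CONTEXT_BUDGET):
    tokens = tokenize(text)
    dropped = max(0, len(tokens) - budget)
    return tokens[:budget], dropped

kept, dropped = fit_to_context(
    "the van two cars ahead is braking hard near the construction cones"
)
print(kept)     # the eight tokens the model actually sees
print(dropped)  # 4 tokens silently discarded, possibly the important ones
```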

Uncertainty:

There are two kinds: sensing and effect uncertainty. Sensing uncertainty means you’re not totally sure you actually saw what you think you saw, or whether it was 4 cm or 4 m away. Effect uncertainty means you’re not sure whether your AI agent’s actions actually took effect.
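A minimal sketch of both kinds, with numbers I invented: the range sensor scatters around the true distance (sensing uncertainty), and the brake command only takes effect most of the time (effect uncertainty).

```python
import random

random.seed(42)  # reproducible runs

TRUE_DISTANCE_M = 4.0  # ground truth the agent never gets to see

def noisy_range_sensor():
    """Sensing uncertainty: readings scatter around the true distance."""
    return TRUE_DISTANCE_M + random.gauss(0, 0.5)  # invented noise level

def apply_brakes():
    """Effect uncertainty: the command only 'takes' 90% of the time (invented)."""
    return random.random() < 0.9  # True means braking actually engaged

for _ in range(3):
    print(f"sensed {noisy_range_sensor():.2f} m "
          f"(truth: {TRUE_DISTANCE_M} m), brakes engaged: {apply_brakes()}")
```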

Goals/Preferences:

Some goals are crisp: logical formulas expressing an expected set of states over the problem’s parameters. Others are increasingly fuzzy, from getting to a location on a map under a set of preferences about fuel use, toll roads, and so on, all the way up to what I guess we would now consider answering a prompt.
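A hedged sketch of that spectrum: a crisp goal is just a predicate over states, while preferences turn the same trip into a cost you trade off. The weights below are invented.

```python
# A crisp goal: a boolean formula over state variables.
def goal_reached(state):
    return state["position"] == state["destination"]

# Fuzzy preferences: the same trip scored as a weighted cost. The weights
# are invented; they encode how much you dislike fuel burn vs. tolls.
def route_cost(route, fuel_weight=1.0, toll_weight=2.5):
    return fuel_weight * route["fuel_liters"] + toll_weight * route["tolls_usd"]

routes = [
    {"name": "highway",   "fuel_liters":  9.0, "tolls_usd": 6.0},
    {"name": "backroads", "fuel_liters": 12.0, "tolls_usd": 0.0},
]
print(min(routes, key=route_cost)["name"])  # backroads: 12.0 beats 9.0 + 2.5*6.0
```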

Number of Agents:

If you’re playing Sudoku, the possibilities are all well known. If you’re playing checkers and the other agent isn’t perfectly rational, the game gets much harder to fully grasp. In fact, as soon as any additional agent is added, you have to reason about their rationality, and you don’t have access to their inputs.
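One way to see the jump in difficulty, with a payoff table I made up: against a perfectly rational opponent you can minimax, but once they might be irrational, all you can do is average over a guessed distribution of their replies.

```python
# Invented payoff table for one decision point: for each of my moves,
# my payoff under the opponent's two possible replies [blunder, punish].
payoffs = {
    "aggressive": [5, -4],  # great if they blunder, bad if they punish
    "safe":       [1,  1],  # steady either way
}

def best_vs_rational(payoffs):
    """Minimax: a rational opponent always picks the reply that hurts me most."""
    return max(payoffs, key=lambda m: min(payoffs[m]))

def best_vs_erratic(payoffs, p_blunder):
    """Against a maybe-irrational opponent, I can only average over a guess."""
    return max(payoffs, key=lambda m: p_blunder * payoffs[m][0]
                                      + (1 - p_blunder) * payoffs[m][1])

print(best_vs_rational(payoffs))      # safe
print(best_vs_erratic(payoffs, 0.7))  # aggressive, but only if my guess holds
```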

Ability to learn:

If you can’t learn, then in a changing environment, or one with other learning agents, you’re nearly doomed unless it’s some kind of peaceful, no-goals type of environment. It’s not something AI does super well without supervision, or at least some type of meta-supervision where the outcomes are measured by automated metrics.
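The simplest version of “ability to learn” is an online update rule. Here’s a tiny sketch (my own stand-in, not anything any carmaker ships) showing why an agent that can’t update is stuck with its priors forever:

```python
# Tiny online-learning sketch: the agent nudges its estimate toward each
# new observation. Freeze the update and it keeps its prior forever.

def update(estimate, observation, learning_rate=0.3):  # rate is invented
    return estimate + learning_rate * (observation - estimate)

estimate = 0.0                       # prior belief: merges never require yielding
observations = [1, 1, 0, 1, 1, 1]    # reality: they usually do (1 = had to yield)
for obs in observations:
    estimate = update(estimate, obs)
print(f"{estimate:.2f}")  # well above the 0.0 prior after six observations
```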

Bounds on Rationality:

If your opponent is perfectly rational, they will make determinate responses to your actions, greatly narrowing the possibilities. If you have to deal with very imperfect information about their state, such as in a partially rather than fully observable environment, even rational actors start to look boundedly rational, or get tangled up with effect uncertainty.
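A common textbook way to model a boundedly rational opponent is a softmax (quantal response) policy: they favor better moves without deterministically picking the best one. The action names and utilities below are invented.

```python
import math

# Softmax (quantal response) sketch of bounded rationality: the opponent
# prefers better moves but doesn't pick the best one every time.
utilities = {"cut_me_off": 2.0, "yield": 1.0, "slam_brakes": 0.0}

def softmax_policy(utilities, rationality):
    """rationality -> infinity recovers argmax (perfect rationality);
    rationality = 0 is uniformly random (no rationality at all)."""
    exps = {a: math.exp(rationality * u) for a, u in utilities.items()}
    total = sum(exps.values())
    return {a: v / total for a, v in exps.items()}

for lam in (0.0, 1.0, 10.0):
    probs = softmax_policy(utilities, lam)
    print(lam, {a: round(p, 3) for a, p in probs.items()})
# at 10.0 the opponent is near-deterministic; at 0.0 anything goes
```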

All of these mercilessly combine in “Full Self-Driving” and many other dreams of AI.

Modularity is not a solved problem. Many of the systems operating today are taking on more modular internal structure, but many others are still black boxes. I couldn’t tell you what the case is for Tesla, as it’s probably the most guarded secret in the company, but chances are they haven’t completely cracked this just yet.

Succinctness/expressiveness is another thing we can’t directly observe, but we do know that many things confuse these AI drivers, including but not limited to construction zones, bollards, street trolley infrastructure, and more. At the very least, they’re a long way from expressing every possible driving environment, or observing it inside their models.

Uncertainty means near doom for FSD. You cannot know the actions of the human drivers, and some have even shown animosity toward Tesla cars sitting perfectly still. No doubt others are testing the Tesla’s reflexes on the road and road-raging against the behaviors of the car and its driver just the same. I can’t imagine this is anything but a confounding variable when you’re trying to train on real-world data and make decisions for every possible case. Some of these edge cases are so rare I struggle to believe you could collect enough data to meaningfully train on the situations and the correct actions to take.

Goals get about as fuzzy as they can when construction brings in exceptionally awkward implementations of traffic flow, unexpected traffic lights, and so on. The goal has to be “get to this GPS position and break no laws to the extent possible,” which is actually a goal defined by the negation of a trillion possibilities, even if it’s not explicitly coded that way. I’m sure there are heuristics: “stay in your lane wherever possible,” the priority of cars stopped at the same time at an all-way stop, etc. But those are narrow situations; how many situations will you even find yourself in that lend themselves to nice rules?

The number of agents you’re dealing with is stupid. At interstate speeds (85 mph in some places, legally), trying to see past the cars in front of you is somewhat important if you’re stuck in a pack and you don’t want to be in a pileup. Because you can’t always directly see those cars, you’re going to have a bad time, because Tesla has removed the most useful intersecting perceptive piece of their puzzle: radar. Something something, “it’s too expensive.” There’s a reason Waymo is sticking with lidar, and it’s not because they’re incompetent.

It’s not capable of learning or being taught on the fly. At best, you report when Autopilot makes a severe error, and maybe someone at Tesla labels it and it makes it into the training set. But how long will it be before that case is actually solved, and why would you want to operate something completely incapable of being corrected? I mean, are you going to stop, reboot the car, and hope you cleared some buffer overflow, even if you know what I’m talking about?

Bounded rationality is as much of a problem as it sounds, and where it doesn’t compound directly with effect uncertainty or partially observable environments, they all nearly overlap into one big “dunno.”

Overall, FSD is as f***ed as Musk’s yearly promises to the fanboys. It’s dying, slowly, under the weight of the knowledge that he was promising it would be finished in 2017 or something, and every year since, that ball has been kicked another single year down the road. At one point, the guy claimed that buying any other car was a nightmare decision because he was going to sell you a robotaxi next year that would make its money back in a year’s time! Imagine discovering that and not keeping it for himself!

I fear that many years from now, this entire era, during which I finished college, entered the workforce, and started observing the world from the standpoint of an adult assumed to be partly responsible for where it goes, is going to be viewed as a f***ing circus.

At least I can juggle and ride a unicycle. Maybe I’ll fit in after all.

I gots 1 citation for ya. https://www.cs.ubc.ca/~mack/Publications/PCAR06.pdf


I'm a Software Engineer, FAA 107 Drone Pilot and Radio Amateur. I write about things related to SWE and Tech and my own projects.