AI vs Senior Coder

The Great Debate

To get as close as possible to human driving, Tesla's engineers wrote some 300,000 lines of rules. But the car still didn't drive naturally: sudden acceleration, abrupt swerves, all kinds of odd behavior. After the KM2 and KM3 tests, it was clear this was not the logic of real-world driving. In short, it lacked the human touch. And if you really wanted to hand-code driving the way a human does it, it would take on the order of 100 billion lines of code. This is, in essence, Moravec's paradox.

Challenges in AI Perception

In theory, computers handle chess and other feats of "high-level" intelligence with ease. But when it comes to perception and motor coordination, the "low-level" intelligence that even human babies possess, computers are hopeless. No wonder McKinsey's survey report shows that although consumers are interested in autonomous driving, the number of paying subscribers is shrinking, most visibly in first-tier cities. Is this the impasse of autonomous driving? I think it can still be saved.

Advancements in AI Models

2023 was barely half over when Musk predicted that Tesla would achieve fully autonomous driving by year's end. This time it was not all bluster. According to the data, after the official FSD V12 update, the share of drives completed with zero driver intervention rose from 47% to 72%, and the average miles per disengagement climbed from 116 to 333. More importantly, the update was widely acclaimed for feeling as if a human were driving. From machine to human, how many more lines of code had to be written? Wrong question. More than 200,000 lines of code were deleted. So what changed?

Data-Driven Approaches

Back in 2023, the answer had already been shouted out: like large models, autonomous driving was shifting from rule-driven to data-driven. After all, the real world is a mess.

AI in Action

Think about the scene at an intersection: you can never exhaust the logic with rules. The strength of large models is that they avoid the information loss of hand-written code, capturing dynamic, non-linear patterns and relationships from huge amounts of data. FSD's strategy is to watch video, real driving video. It turns out there is a great deal of tacit human know-how in it. For example, an Amazon delivery truck flashing its hazards at the roadside is not oncoming traffic; you just go around it. Cyclists weaving through, cattle at a crossing, all the oddities of the road, autonomous driving can now recognize them.
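
To make the contrast concrete, here is a minimal sketch, assuming hypothetical class names (RuleBasedPlanner, LearnedPolicy) rather than anything from Tesla's actual codebase: the rule-based planner needs a hand-written branch for every edge case, while the data-driven policy delegates them to a trained model.

    # Hypothetical contrast between a rule-based planner and a learned policy.
    # None of these classes correspond to Tesla's real FSD code.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        oncoming_vehicle: bool
        hazard_lights_on: bool
        is_stationary: bool

    class RuleBasedPlanner:
        """Every edge case needs another hand-written branch."""
        def plan(self, obs: Observation) -> str:
            if obs.oncoming_vehicle and not obs.is_stationary:
                return "yield"
            if obs.is_stationary and obs.hazard_lights_on:
                # A parked delivery truck is NOT oncoming traffic: go around.
                return "overtake"
            # ...thousands more branches for cyclists, livestock, debris...
            return "stop"  # the default when no rule matches

    class LearnedPolicy:
        """A data-driven policy maps raw observations to controls directly."""
        def __init__(self, model):
            self.model = model  # e.g., a network trained on real driving video
        def plan(self, obs) -> str:
            return self.model.predict(obs)  # edge cases absorbed by the data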

The Future of AI and Driving

And that is to say nothing of the ambition of the large-model leader, OpenAI: to make computers understand our world by simulating it. You can still pick out mistakes in every video, but its showcase Sora not only knows the concept of an apple and what an apple looks like, it can also grasp the trajectory of an apple falling from a tree. The company's co-founder has put it plainly: when we train a large neural network to accurately predict the next word across a huge variety of documents, it is in fact learning a model of the real world. That, too, is remarkable.

With enough computing power, never mind autonomous driving, even artificial general intelligence comes into reach. But as you can see, even with large models, autonomous driving is still not easy. Musk has said that a model may solve problem A only to create problem B. The "godfather of AI" saw this coming long ago: in his view, AI's current level of intelligence is still very poor. He has said that Sora cannot understand the world, because a large model can fit the data, but it cannot predict scenes it has never seen. This imagination, this mental capacity, remains unique to humans.

The Importance of Prediction

Why does prediction matter so much? Humans are known to be efficient learners, and one important reason is that the brain links cause and effect between behavior and outcome. Even in a situation you have never encountered, you can still predict the consequences and adjust your behavior. This is called counterfactual reasoning. Think carefully: isn't that how you learned most of your skills?

New Models and Approaches

On the very day Sora was released, Yann LeCun's Meta launched a video prediction model, V-JEPA, billed as viewing the world the way a human does. LeCun has said that the endgame of autonomous driving can only be a world model. This is the other route to autonomous driving. Isn't your large model just predicting the next word? That is playing with probabilities; what relation does it bear to the real world? A world model is different: it does not pretend to know the physical world by piling up data; it genuinely interacts with the physical world.

The vehicle's sensors act as its sensory system, feeding the parameters of the three-dimensional world to the model in real time. The model then infers the current state, simulates the outcomes of different actions, and picks the best driving strategy. Although they came a little late to autonomous driving, on world models China's domestic automakers have caught the wind, and the whole industry is racing ahead. NIO's NWM, for example, can generate 216 predicted trajectories within 0.1 seconds, compare them one against another, and settle on the best decision.
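
A hedged sketch of that predict-simulate-select loop: sample candidate action sequences, roll each forward through the world model, score the imagined futures, and keep the best first action. The 216-rollout count echoes the NWM claim above; everything else (predict_next, reward) is an assumed interface, not NWM's real API.

    # Illustrative world-model planning loop (names are hypothetical).
    import random

    N_ROLLOUTS = 216      # matches the "216 predictions in 0.1 s" claim above
    HORIZON_STEPS = 10    # how far each rollout looks ahead

    def plan(world_model, state, candidate_actions):
        """Simulate candidate action sequences and return the best first action."""
        best_action, best_score = None, float("-inf")
        for _ in range(N_ROLLOUTS):
            actions = [random.choice(candidate_actions) for _ in range(HORIZON_STEPS)]
            sim_state, score = state, 0.0
            for a in actions:
                sim_state = world_model.predict_next(sim_state, a)  # imagined future
                score += world_model.reward(sim_state)              # e.g., safety + comfort
            if score > best_score:
                best_action, best_score = actions[0], score
        return best_action  # execute only the first step, then replan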

Comparing Predictions: AI vs Human

Its predictive ability is in no way inferior to a veteran driver's. Even problems humans cannot solve can be thrown at it: figure out how to avoid this accident. Real driving is a stream of continuous decisions, each of which may have to land within a second. For example, before the car reaches a pothole, the chassis has already been adjusted. That is not only because the world model is smart, but also because the system's latency is extremely low. The direct experience is simple: it is fast.
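
The pothole example reads as predictive control: if the model expects a hazard inside its lookahead horizon, it acts before the wheel arrives. A minimal sketch under that assumption; the suspension interface here is invented for illustration.

    # Hypothetical proactive chassis adjustment; not any vendor's real interface.
    LOOKAHEAD_S = 1.0  # how far ahead (seconds) the model predicts road surface

    def adjust_chassis(world_model, state, suspension):
        hazard = world_model.predict_road_hazard(state, horizon_s=LOOKAHEAD_S)
        if hazard and hazard.kind == "pothole":
            # Act before the wheel reaches the pit, not after the jolt.
            suspension.set_damping("stiff", wheels=hazard.affected_wheels)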

After all, even a 30-millisecond communication delay means that a car traveling at 120 kilometers per hour has already covered a meter. The delay comes from the car's separate functional modules: the sensor system playing the eyes, the decision system playing the brain, and the control and execution systems, each in its own silo with no supreme commander above them. In Ford's model, the hardware and software controlling the whole car came from 150 suppliers, which amounts to 150 separate code bases.
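
The arithmetic checks out, as this small calculation shows:

    # Distance traveled during a communication delay.
    speed_kmh = 120
    delay_ms = 30

    speed_ms = speed_kmh * 1000 / 3600        # 33.3 m/s
    distance = speed_ms * (delay_ms / 1000)   # = 1.0 m
    print(f"{distance:.1f} m traveled before the command takes effect")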

Towards a Unified System

In the future, everything will plug directly into a base operating system. Never mind whose software or hardware it is: the whole car answers to a single system, and once a command is issued, the wheels respond. This is the car market's first full-vehicle operating system: SkyOS. NIO's new brand ONVO (Ledao) now ships fully equipped with SkyOS. It comprises four operating systems, covering domains from the MCU chips and vehicle control to the smart cockpit. Paired with the world's first automotive-grade 5nm high-performance chip, it shortens task latency by 50%.
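
One way to picture what a full-vehicle OS buys: sensing, decision, and actuation run under one scheduler and one message path instead of hopping across supplier-specific silos. This is purely illustrative and not SkyOS's actual architecture:

    # Illustrative single-OS command path (hypothetical; not SkyOS code).
    class UnifiedVehicleOS:
        """One scheduler owns sensing, decision, and actuation end to end."""
        def __init__(self, sensors, planner, actuators):
            self.sensors, self.planner, self.actuators = sensors, planner, actuators

        def tick(self):
            obs = self.sensors.read()       # one time-synchronized snapshot
            cmd = self.planner.decide(obs)  # one decision point, no silo hops
            self.actuators.apply(cmd)       # command goes straight to the wheels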

It also cuts the whole car's storage footprint by 15% in one stroke, and its virtualization outperforms the KVM used by Google and Amazon. Will this be the endgame of autonomous driving? Not necessarily. Driverless cars are now starting out in campuses and restricted zones. Baidu's Luobo Kuaipao robotaxis may still look clumsy, but they are becoming a travel option for more and more people. After that will come driverless buses. No one knows exactly what the future will look like, but I know the future is here.

Takeaway

Large models and data-driven approaches are closing gaps, but embodied prediction, scene understanding, and low-latency control remain essential. The most robust solutions will combine models, sensors, and human-centered design.

Case Studies: Successes and Failures

Real-world deployments illustrate both promise and limits. Waymo's controlled geofenced services show that careful operational design and conservative routing can yield reliable autonomy in constrained domains. Conversely, high-profile incidents reveal how edge cases—unexpected pedestrians, construction zones, or atypical vehicle behavior—can defeat brittle rule-based systems or poorly generalized perception models.

These case studies underline that a well-defined operational design domain (ODD), rigorous testing, and conservative fallback logic are as important as the underlying model accuracy.

Human–AI Collaboration

Rather than viewing AI as a replacement for experienced drivers, many experts argue for hybrid workflows: AI handles low-level perception and routine control, while humans supervise, manage exceptions, and provide strategic decisions. In industrial settings, this collaboration model reduces risk and leverages complementary strengths.

Designing effective human–machine interfaces requires clear intent communication, predictable behavior, and graceful handover procedures to avoid confusion during transitions between automated and manual control.

Engineering Challenges & Infrastructure

Scalable autonomy demands more than better models: sensor redundancy, deterministic compute stacks, low-latency networks, and standardized safety architectures are necessary. Software modularity, formal verification of safety-critical components, and secure update channels are engineering priorities that determine whether an algorithm can be safely deployed at scale.

Edge compute (onboard accelerators), time-synchronized sensor buses, and rigorous hardware-in-the-loop (HIL) testing pipelines help close the gap between lab prototypes and production vehicles.
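
As a toy version of a hardware-in-the-loop check, one might replay recorded, time-stamped sensor frames through the controller and assert that it answers within its latency budget; the controller interface and the 30 ms budget below are assumptions for illustration.

    # Minimal HIL-style latency check (illustrative only).
    import time

    LATENCY_BUDGET_S = 0.030  # e.g., a 30 ms end-to-end budget

    def replay_frames(controller, recorded_frames):
        """Feed recorded sensor frames and verify per-frame response latency."""
        for frame in recorded_frames:
            start = time.perf_counter()
            controller.step(frame)  # device under test
            elapsed = time.perf_counter() - start
            assert elapsed <= LATENCY_BUDGET_S, (
                f"latency budget blown: {elapsed*1000:.1f} ms on frame {frame['id']}"
            )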

Regulatory and Ethical Considerations

Regulators are asking for explainability, reproducible validation, and incident reporting. Ethical questions—responsibility allocation, bias in training data, and access to safe mobility—require transparent governance and multi-stakeholder input to align technology with public values.

Standards (e.g., ISO 26262, SOTIF) and new frameworks for AI assurance are emerging to provide consistent expectations for manufacturers and operators.

Practical Recommendations

  • Define and publish the operational design domain (ODD) for any deployed system.
  • Invest in diverse, representative datasets and continuous validation against corner cases.
  • Use redundancy in perception and control to mitigate single-point failures.
  • Design clear alerts and handover paths for human supervisors, including time budgets for response (see the sketch after this list).
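
A minimal sketch of the time-budget idea from the last item, with an invented alert-channel interface and an arbitrary 10-second budget: notify the supervisor, wait for acknowledgment, and fall back to a minimal-risk maneuver if the budget expires.

    # Illustrative handover timer; the budget and interfaces are assumptions.
    import time

    RESPONSE_BUDGET_S = 10.0

    def request_handover(alert_channel, fallback):
        """Ask the human to take over; trigger the fallback if the budget expires."""
        alert_channel.notify("TAKE OVER: automation reaching ODD boundary")
        deadline = time.monotonic() + RESPONSE_BUDGET_S
        while time.monotonic() < deadline:
            if alert_channel.driver_has_acknowledged():
                return "manual"      # human confirmed control
            time.sleep(0.1)
        fallback.execute()           # e.g., minimal-risk maneuver: pull over
        return "fallback"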

Expanded Takeaway

AI-driven driving has advanced rapidly, but the remaining challenges are systemic, not merely algorithmic. Embracing integrated engineering, conservative operational designs, and robust human–AI collaboration will determine whether autonomy realizes its promise at scale.