Don’t Call Me, I’ll Call You

On large language models, artificial intelligence, DeepSeek, and trying to find the middle lane between skepticism and surety. I mention bionic arms a lot for some reason.

Recently, I’ve been thinking a lot about bionic arms. You know, like the ones Jax has in Mortal Kombat. They’re kind of like regular arms, but they’re more powerful. He can do a lot with them. It makes him a more capable fighter.

An at-times-violent supercut from the 2021 film version of Mortal Kombat highlights this general point neatly.

In a real-world setting, this is how large language models, machine learning, and other similar tools are best used. They make you stronger and more capable, and they represent more than the sum of their parts. In general, this is the mandate of all tech. The problem is that AI is often used as a way to help you coast, and LLMs are not really useful “coasting” tools. They work best for brainstorming, for navigating complex tasks, and for solving difficult problems that might be above your pay grade. An LLM is perfect when you’re trying to use a tool in a way it’s never been used before, or when you need to program something but have no clue where to start.

But like bionic arms, they are not human, and they never will be. LLMs aren’t independent thinkers. They don’t have a heart. They just parse commands with algorithms in a way that sounds intelligent, even confident at times. And that can keep people from understanding where it’s inappropriate to walk in with a pair of “bionic arms.”

On Bluesky yesterday, amid a messy political climate where “chaotic” would be putting it mildly, one prominent activist decided to use an LLM to speed up the task of organizing federal agencies and departments affected by a “pause” in funding. (The pause, at least in part, has since been temporarily blocked by a federal judge.) They were perfectly capable of compiling this list themselves, but the choice to do so with ChatGPT upset users who felt the activist was shirking a basic journalistic responsibility.

If you ask me, that’s a case of using your bionic arms like regular arms. Save them for a special occasion.

There are other examples of this. This morning I caught a video of a “business ideas” YouTuber who used the $200/month ChatGPT Operator tool to create a business reaching out to Facebook Marketplace users with old pianos they can’t get rid of. He had Operator, which can access and use a traditional web browser, message people on Marketplace and fill out a spreadsheet for him. He essentially turned ChatGPT into his virtual assistant, though he’s entirely on the hook for picking up the pianos and moving them around. I bet he wishes he had bionic arms.


Part of the problem, as stated previously, is that large companies have been making LLMs mostly for the management class, despite the fact that their most tangible benefits show up in more technical settings. That argument, which I originally made in response to a bizarre claim by the CEO of Zoom, feels even more true now than it did then.

But AI is a complicated mess of wires, and it comes with serious ethical concerns, both for creative types and for the broader economy. Its insane energy use and the wholesale theft of content are two things that critics can rightly point to when critiquing it.

However, a hole is being punched in one of those arguments. DeepSeek, a Chinese AI model built on a relative shoestring using last-generation technology, ended up outpacing many models built for tens or even hundreds of millions of dollars. Depending on whom you ask, it’s either a story of the little guy winning or yet another example of China ripping off American ingenuity.

It’s an effective tool, and it was built on the cheap, undercutting claims of energy overuse. And in recent days, amid the chaos that has colored our recent political decision-making, this fresh AI development upended the pecking order, at least in the U.S.

(Even though DeepSeek built its training set using synthetic data, there’s still room for the content-theft argument, just not in the way you’d think. See, OpenAI is implying that DeepSeek stole from its models. Goddamn rich.)

Much has been said about AI being good or evil, a grift or a sea change. When Casey Newton wrote about it last month, attempting to make the case that there may be something to this crazy thing, he ended up deeply frustrating the skeptics. But one thing he probably did get right is this line: “There is an enormous disconnect between external critics of AI, who post about it on social networks and in their newsletters, and internal critics of AI—people who work on it directly, either for companies like OpenAI or Anthropic or researchers who study it.”

If a bionic arm allows you to more effectively do fatalities, and your full-time job is to do fatalities, maybe consider it.

However, I think there’s a disconnect he didn’t talk about. Certain user types are getting real value out of this stuff from a technical standpoint, because they’re treating these tools like bionic arms: reaching slightly beyond their skill set to do something they would not be able to do without them. Those people might be working in areas like technology or development, where a helping hand can extend their existing capabilities. But because they know these arms are synthetic, they are more likely to question the results, helping them avoid some of the dicier traps that LLMs create. If you’re not working in a technical area like data science or engineering, and you’re only experiencing AI as it’s being pitched by companies like Google and Microsoft (as executive stuff, essentially), you may feel like it’s being oversold.

It is being oversold, and not just in the ways critics can easily point to. When I first got into visual journalism as a newspaper designer, I was constantly told to be careful not to overuse tools like Photoshop, especially their built-in features. The reason was twofold: First, it was very obvious when you used the default filters, and second, those tools could become a crutch in the wrong hands.

(Adobe itself must not have heard this advice, given how quickly it has jumped into the AI realm and upset its longtime users.)

I think LLMs becoming a crutch is a real risk right now for end users, who may be using specialized tools of this nature for the first time. It doesn’t help that these tools are being put in wholly inappropriate settings. I didn’t ask for Google Gemini to show up in my search results. I didn’t ask Apple Intelligence to help me make a funny custom emoji, or for a Copilot button to replace one of my modifier keys on my new laptop. But if I load up Claude and tell it to do something for me, I did ask for it, and that distinction matters.

But companies heavily invested in AI basically have to put it everywhere to recoup the high costs of the technology they’ve already built. They essentially force it on users because they want to prove that they didn’t waste their time or our energy. It’s why the stock market lost its marbles when DeepSeek emerged as a true player on the cheap: It looked increasingly clear that these companies might have overspent on NVIDIA GPUs.

But that’s their problem, not yours. As an end user, I recommend following the “bionic arm” approach to leveraging AI: If you can use it as an extension of what you’re already doing, in a way that doesn’t feel ethically compromised to you, it’s absolutely on the table. But using bionic arms just because you have them, as a way to show off? That’s minor-league bionic arm stuff.

As for companies that force these tools into settings where they don’t belong: Don’t call me, I’ll call you.

More AI Stuff

Here’s an example of how not to use your bionic arms: A guy decided to launch a network of AI-generated newsletters for local communities around the country. Yeah, less of this.

MKBHD, shaking off a tough 2024, gives us a look at a pair of augmented reality glasses (remember those?) made by Samsung with Google Android software. Watching it, I can kind of see a future where AI and AR get used together, with each one making the other make more sense.

And yes, the Vatican has an opinion, too. “As in all areas where humans are called to make decisions, the shadow of evil also looms here,” it said in a statement shared this week.

--

OK, we’ve covered our AI budget for at least the next three months. Hopefully we can talk about something else after this.

Find this one an interesting read? Share it with a pal! I’ll be back with an AI-free topic in a couple of days.