Charles de Gaulle’s famous adage about politics, that “Politics is too serious a matter to be left to the politicians,” has been adapted to many fields that we apparently need to protect from professionals. Some held that national security should not be left to the generals; some even thought that international relations should not be left to the diplomats.
Seriously speaking, until last week, when Palantir, the Big Tech company that had bought an ad in the Sunday New York Times three years ago to show its support for Israel, published a 22-item manifesto on social media platforms, I had not given much thought to whose hands artificial intelligence (AI) should be in.
Yes, Palantir had a formal partnership agreement with Israel's Ministry of Defense and supplied technology actively used in the war in Gaza. But who doesn’t?
Peter Thiel, a German American entrepreneur, venture capitalist and co-founder of PayPal (1998), Palantir Technologies (2003) and Founders Fund (2005), was also the first outside investor in Facebook (2004). With an estimated net worth of $30 billion, Thiel ranks among the 100 richest individuals in the world. He said, “I defer to Israel,” when his full-page New York Times ad in support of Israel was published. But who doesn’t in America?
But that 22-item manifesto changed everything. News reports said that Palantir’s post summarized its CEO Alex Karp's book, which compiles not only Karp's long-held beliefs but also Thiel’s idea that tech hasn't done enough for U.S. security. One of the points suggests that the United States should reinstate a draft for military service.
Personally, I have been studying Big Tech’s big jump from machine learning to AI as part of my IT-related teaching jobs. I was aware that some of my students were using chatbots to get their essays written for them, and that the school’s IT department had come up with tools to detect AI-written homework. I have also been reading crazy news stories about people making chatbots their doctors: instead of going to the doctor’s office or a hospital clinic, many people are putting their health-related questions to ChatGPT, Gemini and other AI portals.
As we do in school, you tell people not to do it: When writing your essay, AI simply copies from other people’s work. Essentially, it is plagiarism, a dishonesty that you shouldn’t associate with yourself. And if they cannot steal from other people’s work, AI programs shoot from the hip! Asking an AI engine a health question invokes a major database search through a machine that can learn (that is, AI, basically). If it finds the answer, it copies and pastes. If no answer is readily available, it recommends taking that pill and calling in the morning – if you are still alive.
If AI’s intrusion into our lives were limited to students’ homework or asking Gemini whether that growth on your back is cancer, it could be OK.
But imagine that a company which claims, “Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines,” publishes a manifesto, a political manifesto, demanding that the U.S. pursue global military dominance. That dominance would depend on Palantir-manufactured AI weapons.
Some members of the United Kingdom Parliament tried to belittle Palantir, its chairman Thiel and its CEO Karp as “a parody of a RoboCop film” and “the ramblings of a supervillain,” but this is a company that touts itself as “proud to partner with” Pete "The Drunk" Hegseth’s "War" Department and the U.S. Department of Agriculture to “secure our nation's breadbasket,” and whose motto is “Bad times are good for Palantir.”
In its public manifesto, Palantir demanded that the U.S. reinstate a military draft, saying that “free and democratic societies” need “hard power” in order to prevail. Why would a Big Tech company with a name and a logo snatched from J.R.R. Tolkien’s "The Lord of the Rings" (a “palantir” is a “seeing-stone” in the trilogy, and the stylized circular orb with a triangular negative space in its logo is supposed to evoke Tolkien’s metaphors) want to see the U.S. have more “hard power” to prevail? Prevail in or against what?
Palantir provides military vehicles to the U.S. armed forces, and it signed a strategic partnership for battle technologies with Israel to provide services to the Israel Defense Forces (IDF) for “its war-related missions.”
This month, an attacker hurled a Molotov cocktail at the San Francisco compound of Sam Altman, OpenAI’s chief executive. The attack shook the AI world, perhaps because it happened right after War Secretary Pete Hegseth single-handedly banned another AI tech giant, Anthropic, from use in military operations. Anthropic was founded five years ago by former members of OpenAI, including siblings Daniela and Dario Amodei, its president and CEO, respectively. The company, which is behind the Claude series of large language models, has an estimated value of $380 billion.
Hegseth had the Department of Defense officially declare Anthropic a supply chain risk after the company refused to give defense agencies unlimited access to its AI tools for mass surveillance and autonomous weapons.
Apparently, Hegseth and other goons of U.S. President Donald Trump had been planning to use AI in their domestic surveillance plots, and the feeling inside Anthropic was that its AI should not be used by any government for domestic political purposes. Anthropic people also believed that the Trump administration disliked them, as its executives were not among the tech leaders who donated large sums to Trump or publicly praised him.
Anthropic, claiming that the Pentagon's actions violate its free speech and due process rights, went to court, and a federal judge told the government it could not immediately enforce a ban on Anthropic's tools. Now Anthropic says it is open to negotiations despite the lawsuit.
The whole thing – first the Pentagon’s attempt to get unfettered use of AI for domestic surveillance, and then the attempt to kill or harm people at OpenAI’s San Francisco compound – is disturbing.
Up until recently, almost everyone in the world viewed tech companies positively. No longer: Big Tech does not make much sense as an institution that would serve people. In the eyes of ordinary people, it is heading toward deepfake content and digital communications laced with fraud and scams. Seventy-seven percent of Americans believe AI could pose a threat to humanity in the hands of its multibillionaire owners.
SpaceX is trying to buy Cursor, an AI code-editing startup, for $60 billion because Elon Musk wants to catch up with AI rivals. For some strange reason, SpaceX, a rocket-launching space company, wants “to create the world’s best coding and knowledge work AI.” Anysphere, Cursor’s parent company, has expertise in AI coding tools. OpenAI, meanwhile, is in talks to commit up to $1.5 billion to a private-equity joint venture startup.
But back to Gen. de Gaulle’s dictum! Why do you think a military man objected to leaving politics, the activities associated with the governance of a country or area, solely in the hands of a select group whose sustenance depends on what they do for the country? The same rationale applies here: AI, too, is too serious a matter to be left to those young (and not-so-young) techies with too many billions of dollars and no societal responsibility, aimed only at improving their own status in the industry or their standing with politicians in order to influence legislation, regulations, and government decisions and actions.
The way things are going, we might get all our school homework written and all our aches and pains diagnosed by AI. Its essays and diagnoses would come free, courtesy of what Gil Duran, in his new book, "The Nerd Reich: Silicon Valley Fascism and the War on Democracy," calls the tech-authoritarian movement.