On the Need for an AI Public Option

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of US tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 US presidential election and paid a public relations firm to blame Google and George Soros instead.

These and countless other ethical lapses should prompt us to consider whether we want to give technology companies further abilities to learn our personal details and influence our day-to-day decisions. Tech companies can already access our daily whereabouts and search queries. Digital devices monitor more and more aspects of our lives: We have cameras in our homes and heartbeat sensors on our wrists sending what they detect to Silicon Valley.

Now, tech giants are developing ever more powerful AI systems that don’t merely monitor you; they actually interact with you—and with others on your behalf. If searching on Google in the 2010s was like being watched on a security camera, then using AI in the late 2020s will be like having a butler. You will willingly include it in every conversation you have, everything you write, every item you shop for, every want, every fear, everything. It will never forget. And, despite your reliance on it, it will be surreptitiously working to further the interests of one of these for-profit corporations.

There’s a reason Google, Microsoft, Facebook, and other large tech companies are leading the AI revolution: Building a competitive large language model (LLM) like the one powering ChatGPT is incredibly expensive. It requires upward of $100 million in computational costs for a single model training run, in addition to access to large amounts of data. It also requires technical expertise, which, while increasingly open and available, remains heavily concentrated in a small handful of companies. Efforts to disrupt the AI oligopoly by funding start-ups are self-defeating as Big Tech profits from the cloud computing services and AI models powering those start-ups—and often ends up acquiring the start-ups themselves.
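The scale of that $100 million figure can be sanity-checked with a back-of-envelope calculation. The sketch below uses the common rule of thumb that training compute is roughly 6 × parameters × tokens; every number in it (model size, token count, sustained GPU throughput, rental price) is an illustrative assumption, not a published figure.

```python
# Back-of-envelope estimate of a large training run's compute cost.
# All figures below are illustrative assumptions, not published numbers.

def training_cost_usd(params, tokens, flops_per_gpu_sec, price_per_gpu_hour):
    """Estimate cost via the ~6 * params * tokens rule of thumb for training FLOPs."""
    total_flops = 6 * params * tokens
    gpu_hours = total_flops / flops_per_gpu_sec / 3600
    return gpu_hours * price_per_gpu_hour

# Hypothetical frontier-scale run: 1 trillion parameters, 3 trillion tokens,
# GPUs sustaining ~1.25e14 FLOP/s after utilization losses, rented at $2/hour.
cost = training_cost_usd(
    params=1e12,
    tokens=3e12,
    flops_per_gpu_sec=1.25e14,
    price_per_gpu_hour=2.0,
)
print(f"~${cost / 1e6:.0f} million")  # prints: ~$80 million
```

Under these assumptions a single run lands in the tens of millions of dollars for compute alone, before data acquisition, staffing, and the many failed or exploratory runs that precede a successful one.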

Yet corporations aren’t the only entities large enough to absorb the cost of large-scale model training. Governments can do it, too. It’s time to start taking AI development out of the exclusive hands of private companies and bringing it into the public sector. The United States needs a government-funded-and-directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in federal agencies.

So far, the AI regulation debate in Washington has focused on the governance of private-sector activity—which the US Congress is in no hurry to advance. Congress should not only hurry up and push AI regulation forward but also go one step further and develop its own programs for AI. Legislators should reframe the AI debate from one about public regulation to one about public development.

The AI development program could be responsive to public input and subject to political oversight. It could be directed to respond to critical issues such as privacy protection, underpaid tech workers, AI’s horrendous carbon emissions, and the exploitation of unlicensed data. Compared to keeping AI in the hands of morally dubious tech companies, the public alternative is better both ethically and economically. And the switch should take place soon: By the time AI becomes critical infrastructure, essential to large swaths of economic activity and daily life, it will be too late to get started.

Other countries are already there. China has heavily prioritized public investment in AI research and development by betting on a handpicked set of giant companies that are ostensibly private but widely understood to be an extension of the state. The government has tasked Alibaba, Huawei, and others with creating products that support the larger ecosystem of state surveillance and authoritarianism.

The European Union is also aggressively pushing AI development. The European Commission already invests 1 billion euros per year in AI, with a plan to increase that figure to 20 billion euros annually by 2030. The money goes to a continent-wide network of public research labs, universities, and private companies jointly working on various parts of AI. The Europeans’ focus is on knowledge transfer, developing the technology sector, use of AI in public administration, mitigating safety risks, and preserving fundamental rights. The EU also continues to be at the cutting edge of aggressively regulating both data and AI.

Neither the Chinese nor the European model is necessarily right for the United States. State control of private enterprise remains anathema in American political culture and would struggle to gain mainstream traction. The tech companies—and their supporters in both US political parties—are opposed to robust public governance of AI. But Washington can take inspiration from China and Europe’s long-range planning and leadership on regulation and public investment. With boosters pointing to hundreds of trillions of dollars of global economic value associated with AI, the stakes of international competition are compelling. As in energy and medical research, which have their own federal agencies in the Department of Energy and the National Institutes of Health, respectively, there is a place for AI research and development inside government.

Besides the moral argument against letting private companies develop AI, there’s a strong economic argument in favor of a public option as well. A publicly funded LLM could serve as an open platform for innovation, helping any small business, nonprofit, or individual entrepreneur to build AI-assisted applications.

There’s also a practical argument. Building AI is within public reach because governments don’t need to own and operate the entire AI supply chain. Chip and computer production, cloud data centers, and various value-added applications—such as those that integrate AI with consumer electronics devices or entertainment software—do not need to be publicly controlled or funded.

One reason to be skeptical of public funding for AI is that it might result in lower quality and slower innovation, given greater ethical scrutiny, political constraints, and fewer incentives due to a lack of market competition. But even if that is the case, it would be worth broader access to the most important technology of the 21st century. And it is by no means certain that public AI has to be at a disadvantage. The open-source community is proof that it’s not always private companies that are the most innovative.

Those who worry about the quality trade-off might suggest a public buyer model, whereby Washington licenses or buys private language models from Big Tech instead of developing them itself. But that doesn’t go far enough to ensure that the tools are aligned with public priorities and responsive to public needs. It would not give the public detailed insight into or control of the inner workings and training procedures for these models, and it would still require strict and complex regulation.

There is political will to take action to develop AI via public, rather than private, funds—but this does not yet equate to the will to create a fully public AI development agency. A task force created by Congress recommended in January a $2.6 billion federal investment in computing and data resources to prime the AI research ecosystem in the United States. But this investment would largely serve to advance the interests of Big Tech, leaving the opportunity for public ownership and oversight unaddressed.

Nonprofit and academic organizations have already created open-access LLMs. While these should be celebrated, they are not a substitute for a public option. Nonprofit projects are still beholden to private interests, even if they are benevolent ones. These private interests can change without public input, as when OpenAI effectively abandoned its nonprofit origins, and we can’t be sure that their founding intentions or operations will survive market pressures, fickle donors, and changes in leadership.

The US government is by no means a perfect beacon of transparency, a secure and responsible store of our data, or a genuine reflection of the public’s interests. But the risks of placing AI development entirely in the hands of demonstrably untrustworthy Silicon Valley companies are too high. AI will impact the public like few other technologies, so it should also be developed by the public.

This essay was written with Nathan Sanders, and appeared in Foreign Policy.

Posted on June 14, 2023 at 7:02 AM • 33 Comments

Comments

Fabri June 14, 2023 7:51 AM


Chip and computer production, cloud data centers, and various value-added applications—such as those that integrate AI with consumer electronics devices or entertainment software—do not need to be publicly controlled or funded

Wouldn’t it be better, though, to have the entire supply chain under control? What if cloud data center vendors spy on (or, even worse, manipulate) public AI?

K.S. June 14, 2023 7:52 AM

“Elon Musk bought Twitter in order to censor political speech”

Such partisan hyperbole is a ridiculous overstatement. “In order” implies intent, thus asserting that Elon Musk intended to censor political speech and bought Twitter as a means of achieving that goal. This is not supported by any known facts. Moreover, the hidden premise that political speech was not censored on Twitter before Elon Musk bought it is also demonstrably inaccurate.

An objective assessment would be: as a consequence of Elon Musk’s purchase of Twitter, the censorship of political speech changed its affiliation from Left to Right.

Winter June 14, 2023 8:39 AM

@K.S.

This is not supported by any known facts. More so, the hidden premise that prior to Elon Musk buying Twitter political speech was not censored on that platform is also demonstrably inaccurate.

The first thing Musk did was to censor journalists covering him or his companies:
‘https://www.nbcnews.com/tech/social-media/twitter-suspends-journalists-covering-elon-musk-company-rcna62032

Musk’s twitter also censors anything left-wing:
‘https://theintercept.com/2022/11/29/elon-musk-twitter-andy-ngo-antifascist/

‘https://www.msnbc.com/opinion/msnbc-opinion/elon-musk-twitter-censor-left-accounts-rcna59638

‘https://jacobin.com/2022/11/elon-musk-twitter-crackdown-left-wing-accounts

On the other hand, hate speech got free rein:

‘https://www.nytimes.com/2022/12/02/technology/twitter-hate-speech.html

‘https://www.politico.com/news/2022/12/02/musk-spars-with-advocates-over-hate-speech-on-twitter-00071966

To summarize: before Elon Musk bought Twitter, hate speech was curtailed, and people inciting violence and intimidation, or peddling dangerous fake news, were banned. After Musk was at the helm, those covering Musk or his companies and anything left of Musk’s own political ideas were censored, but hate, violence, and intimidation were given free rein.

From censoring violent revolutionaries to censoring those who want to protect the weak and disenfranchised. That is what Musk intended.

Andy June 14, 2023 10:40 AM

Oh, the US Gov, who told Twitter to shut down vaccine critics and became the arm of Big Pharma, wasting $4 trillion on Coronomania…

Random Nobody June 14, 2023 11:37 AM

@Winter – not sure MSNBC or NY Times articles are proving anything other than the veracity of these propaganda outlets.

Elon buying twitter pulled back the curtain on government sanctioned censorship of free speech, which should be a much bigger concern than one guy censoring things he doesn’t like about private companies.

It blows my mind how many people are willing to look past the sins of one party in order to validate whatever preconceived beliefs they have.

Al Sneed June 14, 2023 11:48 AM

I broadly agree with this essay, but you should rewrite the second paragraph. There are plenty of examples of big tech screwups without resorting to polarizing and widely disputed rhetoric. The language you’re using is great for energizing your already existing base, but not for building a new coalition to address AI issues that cut across our current political affiliations. You’re risking throwing the entire initiative into the culture war and rendering it ineffective. The comments above are already nitpicking your jabs across the aisle.

Clive Robinson June 14, 2023 12:17 PM

@ Bruce, ALL,

Re : Technology has more than one edge.

“Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of US tech companies?”

There are two points to consider in that statement and question, and they are not unrelated; in fact, one may be causal in the direction of the other.

The general point is,

“As of yet, AI as a technology still has not got its feet on the starting blocks.”

But more pointedly,

“AI is already being used for what society regards as bad if not evil intent.”

Hence all the noise about AI being an existential threat.

However, if we look behind the curtain of the bad, that bit about “US tech companies” can be seen as the causal agent of the bad.

This “small group of US tech companies” all have quite bad reputations for the ways they behave towards society. In short they are seen as,

1, The key enablers of “Surveillance Capitalism”.
2, The key enablers of the systems that turn raw surveillance data into capitalist gain.

As such they appear to have no morals.

But if I was to say,

“It’s not just a ‘small group of US tech companies’ doing it.”

That is, whilst the group is small and it is mainly tech companies, other nations like Australia, China, France, Israel, Italy, Japan, … UK, et al. are doing it as well, but in slightly different ways…

The obvious question would be,

“Why the differences?”

To which the answer is,

“It’s what their national legislation both requires and allows.”

As US national legislation has certain requirements, and is way more “one eyed” in other areas, those “US tech companies” can be less circumspect, or more correctly more “bull in a china shop”, in their behaviours.

Nearly all the fears about AI being existential, could be stopped in their tracks by just a few changes in legislation.

For instance giving individuals the rights to their information, not to who does a snatch and grab often under coercion (see US insurance forms, and forms for opening bank accounts etc).

A real right to “Privacy” that the “Might is Right” “Guard Labour” types can not jackboot over would also significantly help.

Society is shaped by legislation, and bad legislation results via “cause and effect” in a bad society.

The US has a huge amount of very bad legislation that has been bought, not by the voters, but by a claque of very short-term-thinking, currently incurable “dark tetrad” mentalities that see society not as something beneficial to all, but as prey to be exploited…

Unless society fixes this issue, then yes AI will become existential to the society we would wish to live in. Even though it won’t of necessity be existential to mankind (unless some real idiots give AI physical agency with WMD, which sadly based on history appears more likely than not).

yeah yeah, bad. June 14, 2023 2:50 PM

AI, oversold hype, when massive communications failure during SARS2, of simple clean central dashboard of threats.

just like 84 sars1, titanic DOD sgml, 2 the http agile world, yeah, there.
gates was a leader then.

put the coffee cup back on the table, please.

History, repeats itself.

Winter June 14, 2023 4:15 PM

@Random Nobody

not sure MSNBC or NY Times articles are proving anything other than the veracity of these propaganda outlets.

That is name calling. Unless you can tell us what disinformation they have published. Can you?

It blows my mind how many people are willing to look past the sins of one party in order to validate whatever preconceived beliefs they have.

You seem to be yourself a perfect example of this.

Elon buying twitter pulled back the curtain on government sanctioned censorship of free speech, which should be a much bigger concern than one guy censoring things he doesn’t like about private companies.

This is totally inconsistent. Twitter is a private company that can select what it wants to publish and what not. That was true before it was bought by Musk and after.

But you seem to consider the selection of messages pre-musk government censorship and post-musk freedom of speech.

It sounds a lot like you think “all speech should be free, but some speech should be more free than others”

lurker June 14, 2023 4:20 PM

@Bruce, All

The second paragraph predictably ruffled a few feathers. But then you go on to suggest publicly funded, publicly licensed, open access AI, and compare the current US situation to Europe and China. Surely this is more than somewhat un-American.

Ted June 14, 2023 4:39 PM

I’d like to read into this further.

Has anyone pencilled out a ‘business plan’ for a public AI? I know this probably isn’t the right terminology.

I’m wondering what the budget, objectives, and organizational structure might look like. Would the public be expecting any ‘tangible’ products or results from the funding?

Clive Robinson June 14, 2023 5:32 PM

@ lurker, Bruce, ALL,

“The second paragraph predictably ruffled a few feathers.”

Hence the many links within, but I might have used “reverse-censor” with Hellon Rusk and his “toys out the pram” behaviours re Twitter[1].

But it was the opening paragraph for me. I can not see any “AI Regulation” of worth till the US legislative and regulatory regimes work for society, not a few mental defectives with no morals or ethics.

Thus @Bruce suggesting

“publicly funded, publicly licensed, open access AI”

Can be seen as the next logical step to protect society, as society would want.

With regards to comparing,

“the current US situation to Europe and China.”

it is actually reasonable, as they have around twice and four times the US population respectively, and with regard to morals and ethics in effect “bracket” the US to the social left and political right.

But for people to see,

“this is more than somewhat un-American.”

Is them in effect adopting the alleged “Ostrich Position” of head down in the sand, and their nether regions exposed for any to take advantage of as brutally as they wish…

History shows that way too many have been brought up in the US with this sort of cognitive damage. Hence the expression “sheeple to the slaughter”, which implies they willingly follow a Judas goat to their doom.

I’m a little more optimistic, hence I talk of “sleepwalking into a gilded trap” and the need “to wake up before it’s too late”.

But the reality behind these “truisms” really is that the people in the US have four major failings to overcome:

1, Insular existence
2, Unrealistic lifestyles
3, Deficient education
4, Undemocratic government

None of the above are either new or unknown even in the US. The real question is,

“Knowing this why do citizens of the US put up with it?”

Find the answer, hopefully see the start of a solution.

[1] As for Twitter’s previous board, well, as I’ve noted before, let’s just say “crooks” rather than “criminals”, as that will have to wait until they get convicted… Something that, with US legislation and agency regulations, is less likely to happen than it would under other legislative and regulatory regimes.

LANE June 14, 2023 6:32 PM

“The United States needs a government-funded-and-directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in federal agencies.”

Oh, right — only the magic mental abstraction called “Government” has the proper noble ethics, selfless technical experts, eternal Congressional political wisdom, and unlimited funny-money to do AI correctly?

In reality, Congress has no other source of AI expertise than the ‘private sector’ — same as those evil corporations.
For big technical projects, the Feds simply contract the work out to private companies and perhaps direct-hire some private-sector labor.

The ‘Government Good: Corporations BAD’ mindset is tedious ideological nonsense.

Random Nobody June 14, 2023 7:20 PM

@Winter
Don’t worry, my comment about “propaganda outlets” is very inclusive, to include FOX, The NY post, any of them. I’m not calling any person a name, or insulting anyone here, I don’t think your “name calling” point really applies. We’re all here to learn, exchange ideas, and find the facts.

My point regarding pre- and post-Elon is that he’s one guy, with little power; it doesn’t matter what he’s censoring in particular. It all comes down to the power he or the government wields over the average citizen. In the case of Elon, it’s minimal at best, whereas the government has the power to exert total control: FISA warrants, black sites, etc. The government has the means to commit serious human rights violations, such as censoring dissidents, which is why I say it should be a much bigger concern.

Follow these two thoughts to their ends: one ends with Scrooge McDuck sitting back stroking his model rocket ships, the other ends at 1984.

I don’t associate myself with any political party, and I think I do a pretty good job on judging our politicians on their actions rather than the words that come out of their mouths. I’m not here defending anyone, just calling out violations of our rights as has been proven through the correspondence between twitter and the FBI that was released by Musk. It’s a fact, and it should scare us all.

jdgalt June 15, 2023 12:12 AM

It seems to me that all so-called tech “progress” since about 1950 has been about enabling surveillance of our private lives. From bank cards which track all our spending to cell phones that allow them to trace our every step, to OnStar in cars (already enacted as mandatory in 2025 and later models) to Alexa, Echo, Siri, and Ring devices spying on our homes, and of course the Internet of Things — if we accept and use these things, we’re living in a panopticon run by the deep state, and soon they’ll ban all forums not run by Big (Lying) Media as “disinformation”.

It’s time to say no to all of them.

Winter June 15, 2023 1:27 AM

@Random

The government has the means to commit serious human rights violations, such as censoring dissidents, which is why I say it should be a much bigger concern.

Could you explain to me why absolutely no one complaining about freedom, FISA, censorship, etc. seems to be even minimally concerned about the ~2 million people jailed in the USA and the ~4 million on probation and parole? That is, the 6 million Americans who have no personal freedom? Who would love to have a life where censorship is the main problem.

Don’t worry, my comment about “propaganda outlets” is very inclusive, to include FOX, The NY post, any of them.

But you do repeat the standard complaints about them. Where do you get these? And if you “trust no one”, where do you get your information? And if every source is propaganda, for whom are they making propaganda? I read newspapers that criticize everyone; who are they making propaganda for?

You see, I am very curious, if not suspicious, about anyone who claims everyone is lying. That sounds to me like an excuse for not having to listen to other people.

Winter June 15, 2023 2:07 AM

@lurker

Surely this is more than somewhat un-American.

It is very human to be un-American.

Winter June 15, 2023 2:15 AM

@John Galt

It seems to me that all so-called tech “progress” since about 1950 has been about enabling surveillance of our private lives.

That is a very closed-minded view of technological progress. We also got better and longer lives, fewer children dying, less hunger in the world, etc.

I would say the fact that I can argue with you about technology while you probably are on the other side of the globe is already telling.

PS: Wasn’t the fictional John Galt responsible for the deaths of some quarter of Americans? And he did it on purpose. I do not really understand why Americans hail such a homicidal character as a hero.

Martin June 15, 2023 4:14 AM

While the headline is alarmist, this essay presents a structure for public policy on AI — essentially, encourage private development that goes beyond the few, huge companies that already have well-developed models.

“…public power to unleash the private sector…” which will develop solutions/responses to whatever the ‘problematic’ aspects of AI are/become.

You could also say that it’s better to go all-in in order to be on the winning team, even if you don’t necessarily like the game, its rules or its outcome, because American/Western AI leadership is better than Chinese leadership, no matter the actual result.

https://www.tabletmag.com/sections/news/articles/how-to-win-the-ai-war

Petre Peter June 15, 2023 8:55 AM

AI is our solution to big data. The problem is that our institutions are medieval and cannot understand big data without an AI. If the government is getting its recommendations from an AI then the elected officials will no longer be our representatives—the AI will. I support the need for a public AI option but the data needed to build it is in private hands. We have private companies with public data and governments with private contracts. There is no longer a clear distinction between public and private.

Security Sam June 15, 2023 12:03 PM

When it comes to rate AI
I have to be very succinct
Without any implied hint
Ain’t no room for you and I.

G June 15, 2023 3:33 PM

Dashing and daring,
Courageous and caring,
Faithful and friendly,
With stories to share.
All through the forest,
They sing out in chorus,
Marching along,
As their song fills the air.

Gummi Bears!!
Bouncing here and there and everywhere.
High adventure that’s beyond compare.
They are the Gummi Bears.

Magic and mystery,
Are part of their history,
Along with the secret,
Of gummiberry juice.
Their legend is growing,
They take pride in knowing,
They’ll fight for what’s right,
In whatever they do.

Gummi Bears!!
Bouncing here and there and everywhere.
High adventure that’s beyond compare.
They are the Gummi Bears.
They are the Gummi Bears!!

Phillip June 16, 2023 1:08 AM

All,

In defense of Schneier: I more than lean left and have noticed the increase in my Twitter feed of what one might call curated right-wing garbage. By this I mean: as it turned out, most of the time I was not even taking the bait.

Best,

Phillip

DeWalt June 16, 2023 12:38 PM

@Phillip

… you present a substance-free “defense” of the original post here — which is no defense at all.

However, you do confirm that the original post is indeed a left-leaning political essay– a valid observation that somehow annoys you when made by other commenters above.

A Nonny Bunny June 16, 2023 3:17 PM

It requires upward of $100 million in computational costs for a single model training run

That was true for the first company that did it, but there have been a lot of optimizations since. I think you can slash that by a factor of 100. Progress is fast.

Winter June 16, 2023 6:03 PM

@DeWalt, Philip

However, you do confirm that the original post is indeed a left-leaning political essay

Which raises the question of what a right-leaning essay on AI would look like.

Matt June 17, 2023 3:56 PM

Artificial intelligence will bring great benefits to all of humanity.

[citation needed]

Clive Robinson June 17, 2023 5:44 PM

@ Matt, ALL,

Re : Beyond the hype.

“Artificial intelligence will bring great benefits to all of humanity.

[citation needed]”

Not really, only common sense.

As I’ve pointed out, the current ML AI is not much more than “Adaptive Digital Signal Processing” (ADSP) that is amped up beyond most people’s comprehension. Put simply, it can find signals in data that humans can not visualize without it as a tool.

But this is nothing new. History shows that the invention of the graph and its use as a way to present information enabled not just what we now consider very rudimentary maths but later science to move forward, by pulling out signals from what was previously noise to mankind. One result was calculus, which might be the scourge of many teenage students, but it enabled much to be accomplished.

So yes, ML will, as a tool for signal processing, bring out information above the current noise it appears to hide within.

There are, however, two downsides to any new tool:

1, Displacement
2, Use for bad purposes.

If we look at machines like the cotton gin and Jacquard loom, they temporarily displaced skilled workers. Their trade was automated, which meant they had a choice: remain unemployed or learn new skills. The same will happen with ML. Some human occupations will be automated out of existence and replaced with mass production and much cheaper goods and services.

The fact is, much of the alleged “skilled work” that will get replaced by ML is kind of “makework” anyway.

One of the things that many workers do not realise is that their jobs are actually of no real worth; they are going through the motions and it is “makework”. About a third of the adult population is getting paid to do nothing of any real worth. Thus some may view the loss of such jobs as potentially a societal benefit.

The real issue of concern, though, is that some people have agendas that are anti-society in many ways. They will grasp at anything that enables them to move their anti-society agendas forward.

This is the real danger of ML: a way for those with anti-society agendas to push them forward, but at more arm’s length.

We’ve already seen ML being used to implement racism, sexism and other discriminatory “isms” with the excuse of “the computer says” as a way to protect themselves.

This is the same as mad people and tyrants pretending they are being “directed by god” and thus are not to blame for the harms they wish to commit against society as we know it.

Some unfortunately will see further advantage in giving such systems “physical agency via weapons”, and people will get seriously hurt. This will get covered up / excused as “collateral damage, in the fog of war” rather than what it actually is: targets selected for elimination by people with agendas of harm.

So yes, ML is here to stay, and initially it will cause harm due to human agendas. However, history shows that such a period will give way to societal benefit if people become educated about the technology.

Phillip June 18, 2023 3:34 PM

@Winter, @DeWalt

Some truth in what you say. I might believe my own trolling is wittier than most. However, if we are to evolve the Internet, we should do less carping from the sidelines. If you were to look up my recent postings, I have endeavored to post something of actually original content. Content requiring effort. Note: I am not using this forum to plug this work. My real name is Phillip, for purposes of this forum only.

I am not asking Twitter to put me in a plastic bubble. I was only defending the claim by suggesting some of the auto-suggestions from Twitter were really amiss, based on my own history of proving.

By the way, I only now realize there is a subscribe feature here. I do enjoy this crowd. So, thank you very much.

Phillip June 18, 2023 3:56 PM

@Winter, @DeWalt

Oh. One other thing: “I am interested in cybersecurity, among other things.”

Sorry, Bruce; I might be violating policy by quoting myself. The change lying around is not helping me with purchasing swag.

Clive Robinson June 18, 2023 8:19 PM

@ Matt, ALL,

Re : Beyond the hype.

“Artificial intelligence will bring great benefits to all of humanity.

[citation needed]”

Not exactly a citation, but there is one in the article. It’s an overview of a demonstration of what can be done with an LLM that is productive in ways few will realise:

https://www.tomshardware.com/news/conversation-with-chatgpt-was-enough-to-develop-part-of-a-cpu

In short, the equivalent of reading ChatGPT a specification, and the ML system does the rest of the workflow to get functional hardware for a CPU, without those involved having to know the VHDL or Verilog “Hardware Description Language” (HDL) or other “assist tools”[1].

Is this going to make Design Specialists,

1, Unemployed
2, More Productive

Whilst the first will happen initially, the second will be more likely in even the very short term. It takes around 20 years to get the real level of experience needed, and much of that is “the drudge of learning the HDL” to get the best from them. And that is “formulaic information” you effectively “learn by rote”, which is not a productive use of an engineer or their time.

Thus the desperate need for people to be able to produce “algorithms in silico” for 200,000-or-more-cell FPGAs, as specialised add-ons for CPUs to make algorithms run 5-50 times as fast for fractions of the power, is going to quickly expand the demand for engineers way beyond supply, even with AI LLM assistants.

[1] I’ve used many hardware “logic design” tools in my time, from the early PALsims upwards. However, I always ended up “hand tweaking”, as I had a jaundiced / experienced eye for how to squeeze things. But I was a rarity, and that was becoming a more acute problem a couple of decades back. And it has persisted even when “assistant tools” started appearing a half decade or so later. But at around $100,000, few ever got to see, let alone use, such tools, and their output was almost always “proprietary / secret”. However, about a decade ago the idea of doing maths algorithms in hardware rather than software on general CPUs started taking off. Even though there were FPGAs to act as an intermediate way to do this, they were still way beyond many even highly skilled people’s ability to use (most “software engineers” can not get their heads sufficiently around parallelism). Thus the idea of assistants came about, and it was not long before people were asking for their own tools, as this 2014 article indicates:

https://community.element14.com/technologies/fpga-group/b/blog/posts/alternatives-to-vhdl-verilog-for-hardware-design

Whilst open-source and non-C/C++ languages are more numerous than they were, the shortage of engineers remains acute and is holding much back.
