Purdue President Chiang to grads: Let Boilermakers lead in ‘sharpening the ability to doubt, debate and dissent’ in world of AI
Purdue President Mung Chiang made these remarks during the university’s Spring Commencement ceremonies May 12-14.
Opening
Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.
President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it’d be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.
AI at Purdue
Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so “hot” that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.
For the moment, let’s assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.
Well, after IBM’s Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.
That doesn’t mean we don’t adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for the speed of adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.
Some have explored banning AI in education. That would be hard to enforce; it’s also unhealthy, as students will need to function in an AI-infused workplace upon graduation. We would rather see Purdue evolve: teaching AI, and teaching with AI.
That’s why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!
And that’s why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.
Pausing AI research is even less practical, not least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with nuanced appreciation of the pitfalls, limitations and unintended consequences in its deployment.
That’s why Purdue just launched the universitywide Institute of Physical AI. Our faculty are the leaders at the intersection of virtual and physical, where the bytes of AI meet the atoms of what we grow, make and move – from agriculture tech to personalized health care. Some of Purdue’s experts develop AI to check and contain AI through privacy-preserving cybersecurity and fake video detection.
Limitations and Limits
As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what’s given, not imagining beyond its combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of “similarity classes.”
At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be the envy of machines. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when combined with sensors and robotics that touch the physical world, you’d have to wonder about the fundamental differences between humans and machines.
Can AI, one day, make AI? And stop AI?
Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?
Will AI be aware of itself, and will it have a soul, however “awareness” and “souls” are defined? Will it also be T.S. Eliot’s “infinitely suffering things”?
Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a human’s mind and memory, is that human life going to stay on forever, too?
These questions will stay hypothetical until breakthroughs arrive that are more architectural than just compounding silicon chip speed and feeding exploding data to black-box algorithms.
However, if, given sufficient time, some of these questions are bound to eventually become real, what then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!
Freedoms and Rights
If Boilermakers must face these questions, perhaps it does less harm to consider “off switches” controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and regulations by government agencies not be granular or static, for governments don’t have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.
What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.
We need skepticism in scrutinizing the dependence of AI engines’ output on their input. Data tends to feed on itself, and machines often give humans what we want to see.
We need to preserve dissent even when it’s inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.
We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.
Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the “collective good.” Today’s most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian “1984”: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.
We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:
- M for Minimalism: only the minimal action on data for the specified purpose.
- O for Optionality: to the maximum degree possible, each person can choose to opt out.
- T for Transparency: in all cases, individuals should be informed.
- A for Appeal: a person can sue companies and the government when the above rights are violated, in an independent judicial system under the rule of law.
My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.
Let us preserve the rights that survived other alarming headlines in centuries past.
Let our students sharpen the ability to doubt, debate and dissent.
Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.
Closing
Now, about asking AI engines to write this speech. We did ask it to “write a commencement speech for the president of Purdue University on the topic of AI,” after I finished drafting my own.
I’m probably not intelligent enough or didn’t trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay, a grammatically correct synthesis with little specificity, originality or humor. It’s so toxically generic that even adding a human in the loop to build on it proved futile. It’s so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I’m wrapping up at last.
Maybe most commencement speeches and strategic plans sound about the same: Universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur: “Don’t you ChatGPT me” whenever we’re just echoing in an ever smaller and louder echo chamber down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.
Well, there were a few words of overlap between my draft and AI’s. So, here’s from both – some bytes “living” in a chip and a human Boilermaker – to you all on this 2023 Purdue Spring Commencement: “Congratulations, and Boiler Up!”