

Blue Hens explore what it means to be human in the age of AI

The AI genie is out of the bottle.

In 2024, artificial intelligence has infiltrated every sector of our lives, from healthcare to politics to dog walking (yes, you really can enlist a robot for that).

But for all its ubiquity, AI remains a question mark in the collective consciousness. Are we optimistic… or afraid? Thinking about this new era feels like watching a Blue Hen playoff game: You’re excited but also on the edge of your seat with nervous energy.

For millennia, human beings have sought to explain the world and its phenomena, asking big questions about our place in the cosmic order: Why are we here? Are we alone in the universe? Could we live forever?

The good news is that machines are set to unlock answers to some of these previously unanswerable questions. The bad news: Machines are set to unlock answers to some of these previously unanswerable questions.

Herein lies perhaps the greatest catch-22 of the 21st century. As a species hardwired for curiosity and critical thought, it would be practically anti-human not to dive headfirst into the deep end of the AI pool. Yet—if not managed carefully—AI threatens to erode those very characteristics that distinguish human nature.


Institutions of higher education have a responsibility to grapple with this dilemma and ensure we get this right. Driving efforts to keep AI human-led, universities are working to guarantee the technology serves—rather than subverts—the species.

At UD, Blue Hens are leveraging AI to tackle everything from physical pain to environmental peril, while creating space for all thoughts (critical and evangelical) on this emerging topic. To this end, UD Magazine has asked some of our experts to weigh in on the million-gigabyte question:

In the age of AI, what does it mean to be human?

UD Astrophysics Professor Federica Bianco

Making AI compute

Wrapping your brain (or, for the cyborgs amongst us, central processing unit) around the implications of artificial intelligence requires first understanding the amorphous technology—a tall order, according to recent headlines. “AI is not what you think,” maintains The New York Times. “Even the scientists who build AI can’t tell you how it works,” claims VICE.

“I have a hard time explaining it, too, because AI is really an umbrella term for a constellation of complex tools,” says Kathy McCoy, professor of computer and information sciences and co-director of UD’s AI Center of Excellence, which connects pioneering researchers and helps fund promising initiatives. “But at their core, these projects all involve the same thing: computational systems that exhibit human intelligence.”

Put another way, AI machines are solving problems and making high-level decisions—cognition once exclusively the domain of Homo sapiens. And they are doing it at lightning speed. While the technology relies on a number of mind-bending subfields (Machine learning! Computer vision! Neural networks!), the concept is relatively simple: Pair data with the right algorithms (instructions for finding and analyzing patterns within said data), and voila! Machine-generated intelligence.
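To make the “data plus algorithm” recipe concrete, here is a minimal sketch in Python using the open-source scikit-learn library. It is not drawn from any UD project; the daylight-and-rainfall numbers and the beach-day labels are invented purely for illustration.

```python
# A toy illustration of "pair data with the right algorithm":
# a decision tree finds a pattern in labeled examples, then applies it to new data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: [hours of daylight, inches of rain] for past days,
# labeled 1 if people went to the beach that day, 0 if they did not.
past_days = [[14, 0.0], [13, 0.1], [9, 1.2], [8, 0.9], [15, 0.0], [10, 2.0]]
went_to_beach = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier()      # the "algorithm"
model.fit(past_days, went_to_beach)   # find patterns in the data

# Apply the learned pattern to a brand-new day: 12 hours of light, light rain.
print(model.predict([[12, 0.2]]))     # e.g., [1] -> the model predicts a beach day
```

The model “learns” nothing more mysterious than a rule for splitting those examples; real systems simply do the same pattern-finding with vastly more data and far more sophisticated algorithms.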

The science has already crept into your life. It’s how your phone’s facial recognition technology works, and how Netflix curates recommendations. At UD, chatbots in admissions and the library field rote questions, leaving staff members to tackle more complex, student-specific matters.

 

AI machines are solving problems and making high-level decisions—cognition once exclusively the domain of Homo sapiens. And they are doing it at lightning speed.

The concept is not new—the term “artificial intelligence” was coined at a small Dartmouth College workshop in 1956. But modern supercomputing technology has exponentially increased the amount of data at AI’s disposal. At the same time, ChatGPT—that revolutionary tool capable of composing everything from book reports to actual books—has elevated AI from the lab to the public domain. Where TikTok took nine months to reach 100 million users and Instagram two-and-a-half years, ChatGPT met the same milestone in roughly 60 days, becoming the fastest-growing consumer application in history.

In many ways, this new dawn is irresistible. UD astrophysics professor Federica Bianco serves on a NASA-appointed panel and calls the technology critical to unlocking the mysteries of the universe, potentially revealing new laws of physics to explain the formation of life on Earth.

“It’s not a question of whether AI is beneficial,” she says. “With increasingly large datasets from astrophysical probes, we have a better chance than ever to understand our place in the cosmos—but AI is necessary to process these data.”

Or consider implications for human well-being. Last year, Sunita Chandrasekaran, associate professor of computer and information sciences and co-director of UD’s AI Center of Excellence, used AI to tailor genetic-profile-specific drug therapies for pediatric cancer patients—the type of precision medicine that may soon become the norm.

UD Philosophy Professor Mark Greene

But as these developments point to an almost miraculous future, practical concerns arise: Will I lose my job? (It’s possible. According to one sobering report from investment bank Goldman Sachs, AI is set to replace 300 million full-time positions over the next 10 years.) Or: Will machines rise up and enslave humanity? (Also possible. In a 2022 survey of 700 AI researchers, more than half reported at least a 10% chance that humans are annihilated—or at least severely disempowered—by this technology.)

Perhaps even more unsettling than unemployment or robot overlords is the existential threat of trudging through life stripped of creative or independent thought. Ask ChatGPT whether this fear is valid, and the bot offers reassurance: “AI itself will not diminish humanity.” Then again, when asked if its own response can be trusted, it deflects: “Use your judgment.”

Humans, it appears, will need to figure this one out themselves.

Keep calm and code on

Some Blue Hens think the metaphysical angst—visions of an android Arnold Schwarzenegger terminating the planet—is a tad dramatic.

Take philosophy professor Mark Greene. He describes AI alarmism as “old fartism,” likening it to outcries over inventions like the calculator or Spellcheck. (The former sparked a 1986 protest of 6,000 sign-wielding math teachers in Washington, D.C.: “The button’s nothin’ ‘till the brain is trained!”)

“Old fartism is a bit of a misnomer because there are plenty of young farts, too,” says Greene, who uses AI to organize teaching databases. “It’s a term for anyone with a ‘back-in-my-day’ attitude. The truth is, people adapt. The world hasn’t come to an end yet.”

Indeed, there are reasons to believe AI may enhance innate humanity—namely, by unleashing imagination. Artistic expression has typically been the realm of quote-unquote artists, but AI is changing the game. Consider Harry Wang, a professor of management information systems. He’s empowering business students with applications for creating music: lyrics, audio, even album cover art. Knowing how to use such multimodal generative AI tools will help Blue Hens in their future careers, but the benefits go beyond professional success.

“To be a good businessperson—or even just an interesting person—you need to be highly interdisciplinary,” Wang says. “Creativity doesn’t need to be a complex thing reserved for experts.”

Engineering and Computer Sciences Professor Austin Brockmeier

Other AI projects on campus are set to relieve individuals of burdens, allowing greater bandwidth for creative endeavors or for nurturing the relationships central to the human condition. When those burdens are financial, researchers at UD’s new FinTech Innovation Hub leverage AI to help families build wealth—and with it, dignity, that basic pillar of human existence. Other times, the burdens relate to physical health, and that’s when UD researchers like Austin Brockmeier, assistant professor of electrical and computer engineering and computer and information sciences, step in. He leads a group using AI to analyze brain data. One project aims to help seizure sufferers predict upcoming episodes so they don’t have to abandon activities like driving or swimming.

“People love to talk about this technology in terms of science fiction, but it’s not magical or futuristic,” he says. “It’s just that, in their complexity, AI tools sometimes create beautiful things that surprise us.”

At the very least, this technological revolution presents an opportunity to take the metaphoric Matrix red pill and confront the meaning of existence.

“This moment forces us to reflect on age-old questions: What does it mean to be excellent as a human being?” says Tom Powers, director of UD’s Center for Science, Ethics and Public Policy and an associate professor of philosophy. “And exactly what kind of life do you want to live?”

Rage against the machine

For some, the answer is simple: A life without AI.

Alan Fox, professor of world religions and philosophy at UD, acknowledges the potential value of the aforementioned developments but bemoans an erosion of intellectual autonomy.

“If we lose the ability to think for ourselves—and that’s where this seems to be going—what are we even here for?” asks Fox, who has banned the use of AI in his classrooms. “I can’t imagine a more boring, existentially irresponsible existence. We might as well give up and let the cockroaches take over.”

 

“What does it mean to be excellent as a human being? And exactly what kind of life do you want to live?” -Tom Powers

For Herbert Tanner, director of UD’s Center for Robotic and Autonomous Systems, the threat looms large. Even as he partners with colleagues on projects that incorporate AI, with applications for everything from homeland security to infant development, he cautions that overreliance on the tech will ultimately stymie independent, critical thinking.

“I see AI as a pendulum that right now is swinging one way—it’s likely to hit a wall and swing back,” says Tanner. “As with any powerful new technology, all depends on how one uses it, and reckless use of these tools without appropriate safeguards may have catastrophic consequences.”

Compounding the issue is overwhelming evidence that AI algorithms, trained on biased or incomplete data, propagate human biases. Tools developed by some of the world’s brightest minds have underestimated the medical needs of Black patients and undervalued female job applicants.

In response, researchers like Cathy Wu, director of UD’s Data Science Institute, are working to ensure AI serves society’s most disadvantaged groups. An early developer of the neural network methodology upon which many AI developments rely, Wu is now leveraging the technology to improve health outcomes for U.S. veterans. “It’s imperative we not let this technology further widen the gap between the haves and the have-nots,” she says. “I want to make sure artificial intelligence is democratized.”

Miguel Garcia-Diaz, vice president for research, scholarship and innovation

The road to a safe and equitable AI future is fraught, and the terrain is constantly evolving. Attempts to navigate this inflection point have landed most Blue Hens in technological limbo, not firmly planted in either pro- or anti-AI camps, but somewhere in the middle of an uncharted spectrum.

This is the case at least for Christopher Rasmussen, associate professor of computer and information sciences. He likens the development of AI to that of the atomic bomb—equal parts amazing and terrifying. “Should we have gone down this road at all? Impossible to say.”

This year, he and Michele Lobo, associate professor of physical therapy, are leveraging the technology for the automatic assessment of motor development in infants.

“Sure, I feel torn about the ramifications of artificial intelligence,” says Rasmussen, who teaches a graduate seminar on AI ethics. “It’s not black and white for me, and these issues govern the types of projects I choose.” (On his no-go list are military applications like autonomous weapons capable of making wartime decisions without the variable of human empathy.)

“What the net outcome is for the human race remains to be seen,” he adds. “In the meantime, it’s important for people to keep having the discussion, to keep shaping AI’s direction. That’s all we can do.”

Education in the age of AI

When it comes to spearheading dialogue and, ultimately, striking the right balance between man and machine, institutions of higher education are poised to lead. If society hopes to establish, as the Biden administration calls for, an AI “containment strategy”—a collective agreement about how much autonomy humans are comfortable ceding to machines—government and industry leaders will need input from every field, from physics to philosophy.

But before academia can help establish these guardrails, higher education must turn inwards. At a time when machines are increasingly providing the answers, what content is worth teaching?  If students use AI to pen the first draft of a paper, are they being resourceful—or plagiaristic?

Cathy Wu, director of UD’s Data Science Institute

And how does using AI to mine data for a scientific discovery challenge longstanding definitions of research integrity?

“There is no consensus yet on the rules that ought to constrain and guide AI, and because the AI space is evolving so quickly, trying to develop the rules is a little like stabbing Jell-O with a fork,” says Powers, the ethics expert. “But at UD, people have really rolled up their sleeves, diving in headfirst to sort this out for students and faculty, and to guide the use of AI in education.”

In an effort spearheaded by the nonprofit Ithaka S+R, the University is one of 18 institutions across North America contributing to a national report on the state and future of AI in higher education. UD has already developed programs in artificial intelligence—in the spring, the University became one of the first in the nation to offer a graduate certificate in generative AI. And, in the summer of 2023, the interdisciplinary AI for Teaching and Learning Working Group, comprising faculty, staff and students, convened to examine the AI landscape and provide guidance for campus departments. According to members, educators are on the cusp of a new era—one that, if handled correctly, promises a revitalization of the field.

Vatsal Sonecha, EG91M

“If a student’s goal is simply to graduate and move on, then yes, AI can undercut learning,” says Joshua Wilson, associate professor in the School of Education. “But if the goal is to actually learn and grow, then AI becomes a tool to help students find meaning and reward in the learning process itself. If we can focus on that piece, AI isn’t a threat.”

Gone are the days of rote memorization and textbook cramming. Now is the time for critical thinking, ethical decision making and the development of EQ (emotional intelligence) over IQ.

“To think and read deeply, sympathetically and imaginatively and to express oneself creatively, these are the skills that the machines will never replace,” says Matt Kinservik, professor of English. “We’re not just preparing a future workforce; we’re preparing the stewards of our democracy.”

In one of his undergraduate classes, Kinservik has students analyze dystopian literature, including the classic Brave New World. Typically, the curriculum involves essay writing, but AI—eminently capable of completing this homework assignment on behalf of an English major—has changed the game. And for this, Kinservik says: “I feel liberated. There’s nothing particularly magical about the essay as an assessment tool.” Last semester, he leaned into the capabilities of ChatGPT, enlisting it to compose a series of college-level essays, each arguing that a different character is the hero of the novel. Then, his human students critiqued these papers and penned their own arguments against them—exactly the type of machine-output assessment society needs.

 

“We’re not just preparing a future workforce; we’re preparing the stewards of our democracy.” -Matt Kinservik

“I believe all of the hype surrounding AI,” Kinservik says. “But I don’t think we should be afraid of this technology—and not just because resistance is futile. This is an exciting moment for education.”

Moving beyond fear

Trepidation—how much is valid?—is a recurring theme in conversations about AI. And UD alumnus Vatsal Sonecha, EG91M, who has worked at the intersection of AI and cybersecurity for both private and Fortune 500 companies, understands the impulse—he’s more familiar than most with risks posed by AI in the hands of bad actors. But as he sees it: “You can get motivated by fear, or opportunity. We aren’t putting the genie back in the bottle, so let’s figure out how to harness this technology.”

As an adviser to the College of Engineering, Sonecha believes UD students have “moved past fear to understanding the essential controls for safety.” As he puts it: “They are already working on some really innovative applications of AI… and the outcomes will be amazing.”

The rest of us may have some catching up to do. But in all the time spent discerning whether this technology will ultimately help or harm us, perhaps we’re missing how accurately it reflects us. AI contains multitudes. It is neither entirely good nor entirely evil. It is simultaneously powerful and inept, capable of making the world better or dismantling it entirely. Like every member of the human race, it is a brilliant, biased, breakable work in progress.

Amidst all the uncertainty, one thing is irrefutable: This inescapable technology is making us feel. Something that, as yet, no robot can replicate.

Read this story on UDaily.

Article by Diane Stopyra.