EC: What are you?

U-M GPT: I am an AI language model developed by OpenAI, and my purpose is to assist you with any questions or information you need. I’m here to help students, staff, and anyone affiliated with the University of Michigan by providing support, answering queries, and offering guidance on a wide range of topics. How can I assist you today? 

 

In December 2022, OpenAI—an artificial intelligence research organization—released a free preview of its new chatbot, ChatGPT, and the app quickly became a fixture of global news coverage.

Less than one year later, in August 2023, the University of Michigan launched its own series of generative AI tools available to all students, faculty, and staff: U-M GPT, U-M Maizey, Gemini, ChatSonic, DALL-E, and more.

“A lot of people on campus saw that this was going to be a transformative technology,” said Don Lambert, director of IT infrastructure at Michigan Engineering. “We could either wait and see what happens with this, or we could go ahead and lead, and we chose to lead.”

The Michigan Daily reported that U-M was “the first major university to develop its own generative AI,” and President Santa J. Ono posted on X that this suite of generative artificial intelligence (GenAI) would provide users with “tools that firmly emphasize the importance of equity, accessibility, and privacy.”

Despite the university’s push, many faculty, staff, and students continue to have concerns about what embracing this technology could mean. Where does GenAI source its information? What do we do with AI-generated art? Most importantly—how can this technology be used and misused?

Generative AI distinguishes itself from other forms of AI by its ability to create content. “It can be text, videos, sounds, images, code,” explained Lambert. “A lot of times when talking about GenAI we’re talking about large language models, and in the end it’s really about understanding language.”

In STEM fields, AI has been a helpful assistant for decades.

At U-M, an IBM 650 computer system was installed in September 1956, and the Artificial Intelligence Lab can trace its roots back to a Logic of Computers Group formed in 1970. U-M was a pioneer in the field back then, as it continues to be now.

Just this year, researchers at the lab have been testing the use of AI to improve video-based surgical learning, decode dog vocalizations, and detect PFAS chemical contamination.

The applicability of AI in humanities research might be less apparent—especially in a field like history—but these new language models are more capable of accurately reading and understanding texts than any of their predecessors.

By training GenAI on specific data sets, historians at the University of Venice have created a neural network that “can recreate missing portions of inscriptions and attribute dates and locations to the texts,” reported Moira Donovan in the MIT Technology Review.

Local historians have also been getting creative with GenAI’s ability to read digitized materials and create maps of streets and neighborhoods from specific time periods.

As exciting as these possibilities may be, there are still limitations in what the AI knows. OpenAI gathers its training data through web scraping—extracting data from online sources—as well as licensing agreements with content providers. But the process of assembling that information and presenting it to users is murky and buried in code.

This is particularly concerning on college campuses where students are challenged every day to create intellectual content—and where academic integrity is taken very seriously. And it’s a challenge for disciplines that rely heavily on reading and writing, like literature, philosophy, religion, and history.

When it comes to teaching, perhaps the biggest question facing history faculty is how to manage students’ engagement with AI in their writing assignments.

Professor Henry Cowles workshopped an essay by U-M GPT in one of his history courses. At the start of class, he asked the AI to write an essay responding to the same prompt his students received (comparing Charlotte Perkins Gilman’s 1892 short story “The Yellow Wallpaper” to a more recent source).

“The first attempt would have received an F because it didn’t cite any passages,” Cowles said. “But when we clarified that requirement, and then clarified again when the wrong recent source was cited, the AI produced a paper that the students and I agreed deserved at least a C.”

The temptation to use GenAI as a shortcut might be too great for some students.

“What I see front and center is the illusion of a writing machine that promises to help students meet their own goals,” said Professor Victoria Langland, “a kind of twenty-first century ‘hack’ that would simply assist busy students in getting to where they want to go more efficiently.”

While the potential for this kind of abuse is very real, what troubles many faculty most runs deeper: writing itself is an essential part of building historical knowledge and understanding, and outsourcing it to a machine short-circuits that process.

“For historians, writing is a means to an end, a tool for helping us to think, to analyze, to weigh different kinds of evidence,” explained Langland. “It is not the end or goal itself. If we don’t write and revise, we don’t really sort through what we think about our evidence.”

Finding evidence and building arguments are also vital to the work of history students.

“When I think about how to engage with AI, I go back to the fundamentals of what I’m really trying to accomplish,” said Professor Farina Mir, “and this is what’s in my syllabus: help students develop skills from history and the humanities, including critical reading, engaged analysis, argumentation, historical contextualization, and the ability to present arguments verbally and in writing.”

Is it possible for AI to support these efforts? U-M GPT says yes:

I can assist you with your history studies in several ways:

Providing Information: I can supply detailed information on specific historical events, figures, and time periods.

Clarifying Concepts: I can help explain complex historical concepts, causes, and effects.

Study Resources: I can suggest textbooks, academic papers, and online resources.

Practice Questions: I can generate practice questions and quizzes to test your knowledge.

Essay Writing: I can guide you with essay structure, thesis development, and provide feedback on your drafts.

Timelines: I can help you create timelines of historical events to better understand the chronology and relationships between events.

Discussion: I can engage in discussions about historical interpretations and historiography.

Review Sessions: I can help review your notes and discuss major themes to prepare for exams.

But it does require instructors to change the way they manage their classes.

Pragya Kaul, a graduate student instructor, made significant revisions to her syllabus last year. “I had to locate, modify, or draft new policies for my classroom on using AI and the range to which it is acceptable,” she said.

“Instructors have new decisions to make: is it acceptable that AI tech is used to edit and check for grammar, or should it be banned wholesale?” asked Kaul.

Undergraduate students themselves also carry concerns.

“I try to avoid AI as much as possible, even though I know many of my classmates have taken advantage of the new technology,” said recent U-M History graduate McKenzie Liberi (BA 2024).

“I personally do not think that my education would benefit from using AI, and I view it as a new form of plagiarism. Many of my professors have this same opinion, and I also do not want to be accused of academic misconduct.”

Sanford Rossen's design for a robot professor, December 1950. Rossen (1927-1984) earned a Bachelor of Architecture degree at U-M in 1951; his designs include outdoor music venues like Pine Knob in Clarkston, Michigan. (Bentley Historical Library, University of Michigan)

The College of LSA has provided some guidance on how to manage AI use in the classroom, including sample syllabus statements and prompt literacy resources—which help users learn how to interact with AI more effectively.

“LSA instructors are free to either experiment with or exclude GenAI tools from their classes,” said Emily Ravenwood, manager of the Learning and Teaching Consultants at LSA Technology Services, “and if there are more specific guidelines per field, it’s the department rather than the college who needs to set those.”

Whether or not U-M History writes a formal policy, the community—faculty, students, and staff alike—continues to seek guidance and understanding. Many are finding support and generating ideas from one another, and are staying focused on the essential goals of the history classroom.

In the January 2024 department meeting, U-M History’s interim chair, Joshua Cole, included AI as an agenda item, and in August 2024 the department held a workshop to continue the conversation.

GenAI has only just arrived, and it’s not going away any time soon. So, what is a historian to do?

“Use blue books,” laughed Mir, referring to the physical paper booklets that college instructors have used for decades to facilitate in-person writing.

An essay in the Michigan Daily from 1973 by Robert Barkin called “Computer Madness” envisioned a future where computers are in charge of scheduling classes:

Jones: For the last time, I want to drop Music in Technology. How—

Computer: Why do you want to drop it?

Jones: It’s bogue, man.

Computer: That does not compute.

Jones: It’s dull.

Computer: Continue.

Jones: For God’s sake, tell me how to get out of it!

Computer: Section number.

Jones: 003.

Computer: You cannot drop that class.

Jones: Why the hell not?

Computer: Drop/adds deadline passed weeks ago.

Barkin couldn’t have known that a half-century later U-M would launch U-M Maizey, an AI tool that now helps students with their scheduling—but with a little less attitude.

There may yet come a time when AI becomes an essential partner for historians. But it’s difficult to imagine an AI becoming intoxicated by the smell of parchment in the archive or reading a collection of letters against the grain. For now, only a human can fully understand the complicated, contradictory, and seemingly irrational decisions that only humans can make—decisions that sometimes go on to shape the course of history.

 

Originally published in the 2024 edition of the History Matters magazine.