As OpenAI’s ChatGPT Scores a C+ at Law School, Educators Wonder What’s Next

Grappling with fallout from the groundbreaking bot, teachers from elementary school to the Ivy League push to keep thinking at the center of education

By Greg Toppo | February 7, 2023
Eamonn Fitzmaurice/The 74

Though computer scientists have been building chatbots for more than 70 years, 2023 is fast becoming the year in which educators are realizing what artificial intelligence means for their work.

Over the past several weeks, they’ve been putting ChatGPT through its paces on any number of professional-grade exams in law, medicine, and business, among others. The moves seem a natural development just weeks after the groundbreaking, free (for now) chatbot appeared. Now that nearly anyone can play with it, they’re testing how it performs in the real world — and figuring out what that might mean both for teaching skills like writing and critical thinking in K-12 and for training young white-collar professionals at the college level.

Most recently, researchers at the University of Minnesota Law School tested it on 95 multiple-choice and 12 essay questions from four courses. It passed, though not exactly at the top of its class: the chatbot scraped by with a “low but passing grade” in all four courses — a C+ student.

But don’t get complacent, warned Daniel Schwarcz, a UM professor and one of the study’s authors. The AI earned that C+ “relative to incredibly motivated, incredibly talented students — and it was holding its own.”

Think of it this way, Schwarcz said: Plenty of C+ students at the university go on to graduate and pass the bar exam.

Daniel Schwarcz

ChatGPT debuted less than three months ago, and its respectable performance on several of these tests is forcing educators to quickly rethink how they evaluate students — assigning generic written essays, for instance, now seems like an invitation for fraud.

But it鈥檚 also, at a more basic level, forcing educators to reconsider how to help students see the value of learning to think through the material for themselves. 

Before he encountered ChatGPT, Schwarcz typically gave open-book exams. The new technology is making him think more deeply about whether he was often testing memorization, not thinking. “If that’s the case, I’ve written a bad exam,” he said.

And like Schwarcz, many educators now warn: With improving technology, today鈥檚 middling chatbot is tomorrow鈥檚 valedictorian.

“If this kind of tool is producing a C+ answer in early 2023,” said Andrew M. Perlman, dean of Suffolk Law School in Boston, “what’s it going to be able to do in 2026?”

Fake studies and ‘human error’

Lawyers aren’t the only professionals in the chatbot’s crosshairs: In January, Christian Terwiesch, a business professor at the University of Pennsylvania’s Wharton School, let it loose on the final exam of Operations Management, a “typical MBA core course” at the nation’s pre-eminent business school.

While the AI made several “surprising” math mistakes, Terwiesch wrote in his paper, it impressed him with its ability to analyze case studies, among other tasks. “Not only are the answers correct, but the explanations are excellent,” he wrote.

Its final grade: B to B-.

A Wharton colleague, Ethan Mollick, said in December that he had gotten the chatbot to write a syllabus for a new course, as well as part of a lecture. It even generated a final assignment with a grading rubric. But its tendency to occasionally deliver erroneous answers, Mollick said, makes it more like an “omniscient, eager-to-please intern who sometimes lies to you.”

Indeed, AI tools often create problems of their own. In January, Jeremy Faust, an emergency medicine physician at Brigham and Women’s Hospital in Boston, asked ChatGPT to diagnose a 35-year-old woman with chest pains. The patient, he specified, takes birth control pills but has no past medical history.

After a few rounds of back-and-forth, the bot, which Faust cheekily referred to as “Dr. OpenAI,” said she was probably suffering from a pulmonary embolism. When Faust suggested it could also be costochondritis, a painful inflammation of the cartilage that connects rib to breastbone, ChatGPT countered that its diagnosis was supported by research, specifically a 2007 study in a medical journal.

Then it offered a citation for a paper that does not exist. 

The AI platform has great potential for use in medicine, but has huge pitfalls, says Jeremy Faust, MD

While the journal is real — and a few of the researchers cited have published in it — the bot created the citation out of thin air, Faust wrote. “I’m a little miffed that rather than admit its mistake, Dr. OpenAI stood its ground, and up and confabulated a research paper.”

Confronted with its lie, the AI “said that I must be mistaken,” Faust wrote. He began to feel like an astronaut in “2001: A Space Odyssey,” with the computer playing HAL-9000, “blaming our disagreement on ‘human error.’”

Faust closed his computer.

A scene from “2001: A Space Odyssey,” in which a computer commandeers a space voyage. A Boston emergency room physician who watched recently as a modern AI created a fake medical study to support its diagnosis said he felt like the astronauts in the movie. (Transcendental Graphics/Getty Images)

‘Proof of original work’

Such bugs haven鈥檛 stopped educators from test-driving these tools for students and, in a few cases, for professionals.

Last December, just days after OpenAI released ChatGPT, Perlman, the Suffolk dean, presented it with a series of legal prompts. “I was interested in just pushing it to its limits,” he said.

Perlman transcribed its mostly respectable replies and co-authored a paper with the chatbot.

Andrew M. Perlman

Peter Gault, founder of the AI literacy nonprofit Quill.org, which offers a free AI tool designed to help students improve their writing, said that however fast teachers think things are moving this winter, they are moving faster still. Case in point: an online “prompt engineering” channel on the social platform Discord, devoted to helping students refine their ChatGPT requests for better, more accurate results, now counts tens of thousands of members, he said. “There are tens of thousands of students just swapping tips for how to cheat in it,” he said.

Gault’s nonprofit, along with a partner organization, has already debuted a tool that helps educators sniff out the more formulaic writing that AI typically generates.

While other educators have suggested that future ChatGPT versions could feature a kind of digital watermarking that identifies cut-and-pasted AI text, Gault said that would be easy to circumvent with software that basically launders the text and removes the watermark. He suggested that educators begin thinking now about how they can use tools like Google Docs’ version history to reveal what he calls “proof of original work.”

Peter Gault, founder of Quill.org, talks to students. Gault鈥檚 nonprofit uses AI to help students improve their writing. (Courtesy of Peter Gault)

The idea is that educators can see all the writing and revising that go into student essays as they take shape. The typical student, he said, spends nine to 15 hours on a major essay. Google Docs and other tools like it can show that progression. Alternatively, if a student copies and pastes an essay or section from a tool like ChatGPT, he said, the software reveals that the student spent just moments on it.

“We have these tools that can do the thinking for us,” Gault said. “But as the tools get more sophisticated, we just really risk that students are no longer really investing in building intellectual skills. It’s a difficult problem to solve. But I do think it’s worth solving.”

‘Resistance is futile’

Minnesota’s Schwarcz flatly said law schools must train students on tools like ChatGPT and its successors. These tools “are not going away — they’re just going to get better,” he said. “And so in my mind, ultimately as educators, the fundamental thing is to figure out how to train students to use these tools both ethically and effectively.”

Perlman also foresees law schools using tools like ChatGPT and whatever comes next to train lawyers, helping them generate first drafts of legal documents, among other products, as they learn their trade.

In the end, AI could streamline lawyering, allowing attorneys to spend more time practicing “at the top of their license,” Perlman said, engaging in more sophisticated legal work for clients. This, he said, is the part of the job lawyers find most enjoyable — and clients find most valuable.

Eamonn Fitzmaurice/The 74

It could also make such services more affordable and thus more available, Perlman said. So even as educators focus on the technology’s threat, “I think we are quickly going to have to pivot and think about how we teach students to use these tools to enable them to deliver their services better, faster and cheaper in the future.”

Perlman joked that the best way to think about the future of AI in the legal profession is to remember that old “Star Trek” maxim: “‘Resistance is futile.’ This technology is coming, and I think we ignore it at our peril — and we try to resist at our peril.”
