How to Be a Student in the Age of AI

Zachary Biondi | August 23, 2024 | AI and the Future of Knowledge | PerSpectives

In March 1811, a group of roughly 200 disgruntled textile workers gathered near Nottingham, England.

Factory owners were using new technologies to justify firing framework knitters. The new machines also stripped the skill from their craft and were used to produce cheap, poor-quality garments.

So, in the middle of the night, hammers in hand, the knitters entered a textile factory and smashed the new machines to pieces.

Machine breaking soon became a movement throughout the English Midlands. Organized under their mythical leader Ned Ludd, the knitters who opposed the adoption of these new machines became known as the Luddites.

When the word “luddite” is used today, it means someone who opposes technological progress. History tells us that luddism is more nuanced. The original Luddites did not oppose technology — after all, garment workers used machines called stocking frames. And they did not oppose progress. Their concern was that new technologies served the interests of the wealthy few and left communities in shambles. Luddites opposed what they called “obnoxious” machines. New technology is only progress when it genuinely enriches lives.

As more companies pursue Artificial Intelligence, some people are voicing concerns that resemble those of the Luddites. These concerns are especially important in education: well-being depends on good education, communities need good students, and education is changing radically because of AI.

If we want to be good students, and if we want to see meaningful progress, what should we think about this new technology? How should we use it? Might AI be an obnoxious machine?

Beneficial or Convenient?

In the 21st century, students typically do their work in front of a computer, with their phones nearby. The question they face is how best to complete the work. Of course, a student wants a good grade. But from an educator’s perspective, a good grade is meant to represent the fact that the student learned something important: the student understands themselves and the world better.

A Luddite today would ask how our technologies fit into the project of becoming educated human beings. A student has a range of technologies at their fingertips. AI products are increasingly able to complete student work entirely. Large language models, like ChatGPT and Claude, can produce essays, summaries, and translations. They can solve math problems. A student need only copy and paste.

But AI might be used in other ways. A student can use it to edit something they wrote, find helpful sources of information, or break down a task into manageable steps. Sometimes a large language model or a search engine reveals what requires more attention and thought.

A student can also choose not to use AI products, either on their own initiative or because a teacher has banned them.

What should you do? The Luddite approach is to ask yourself: does using this technology genuinely benefit me and my community, or am I using it merely because it is convenient? Does it help make you a more educated person, increasing your understanding of yourself and the world, or is it a shortcut, a substitute for thinking? Whose interest does using the technology serve?

The answers to these questions may not always be obvious or straightforward. You must be humble and honest with yourself. You might want to speak to a teacher about the best way to use new technologies in your education. (Though be patient with them. As a teacher, I can say that we are puzzled by these questions, too.)

However, you can be sure that asking ChatGPT to answer the questions won’t help much.

Stealing or Creating?

Students at all levels are warned about cheating. There are numerous forms of “academic dishonesty,” like plagiarism. Your work needs to be original, your “own”; you shouldn’t steal others’ ideas. Given this norm, teachers worry that AI has unleashed an epidemic of cheating.

There are a couple of ironies here. First, people often say that, unlike a human being, an AI cannot be genuinely creative. The text a language model generates is merely a product of statistics and data sets. No originality.

If a student turns in an AI-generated paper, what has gone wrong? It isn’t exactly plagiarism, since the text is new. And the student didn’t steal ideas from the AI, since the AI cannot have ideas of its own. It seems the student didn’t take anyone’s ideas.

But, you may think, the AI doesn’t have ideas because it is merely reorganizing human ideas. This is the second irony. Tech companies are facing lawsuits alleging that the process of creating their AI products involves stealing ideas from others. The data sets that the models use comprise original work made by human beings. The companies are profiting from that work without paying for it. Should that be allowed? How should it affect the norms of academic integrity?

What lessons should a student learn here? A Luddite would say that the issue with turning in an AI-generated paper is less about vague concepts like “originality” and more about a student failing to see what the purpose of their education is. The value is in the process of creating the paper. That’s where the intelligence is. If a machine erodes this value, is it obnoxious?

In the writing process, students engage with the ideas of others. No matter what you make, it is based on work that came before. That is unavoidable and good! In doing your work, you come to appreciate the value of other people and how they contribute to our knowledge. You understand yourself more because you see the influence your community has on you.

According to a Luddite, you should do your work in a way that appreciates the value of others. And, with their profits in mind, this is something many powerful AI companies do not do. Being educated means contributing to the community of knowledge, giving back what you received from others. That attitude is the attitude of a good citizen and should guide the development and use of AI, whether in education or elsewhere.

When Will I Use This?

Students often feel motivated to use AI products because they don’t see the value in the work they’re required to do. If an AI can do it, why do you need to know how to do it? Why not automate more education, if we have the technology?

A way to consider this question is to notice that you are always learning more than you realize. John Dewey, an American philosopher and education theorist, said in 1938,

"Perhaps the greatest of all pedagogical fallacies is the notion that a person learns only the particular thing he is studying at the time. Collateral learning in the way of formation of enduring attitudes, of likes and dislikes, may be and often is much more important than the spelling lesson or lesson in geography or history that is learned. For these attitudes are fundamentally what count in the future" (Experience and Education, p. 48).

As they do schoolwork, in addition to learning writing or math, students in the age of AI are developing enduring attitudes about technology: what place it should have in the life of an educated person. The Luddites thought we should think more carefully and critically about new technologies. We should avoid the fallacy that novelty is the same as progress, that being convenient is the same as being beneficial. For the sake of our communities, we can all learn a lesson from them.

Zachary Biondi teaches Philosophy at the University of Illinois Urbana-Champaign. He primarily researches Ethics, Ancient Philosophy, and Philosophy of Technology. On the topic of Artificial Intelligence, he is concerned about labor issues, existential risk, and the moral status of machines.