Artificial Intelligence, humanism and obscurantism

While scientists are producing vaccines and medicines with AI, engineers are developing software and geneticists are deciphering new genes, some professionals in the humanities have become obsessed with mounting a witch hunt against those who use Artificial Intelligence.

From apocalyptic discourses about algorithms to the outright prohibition of AI tools in certain environments, it is very likely that scholars in the social sciences and humanities are squandering a great historical opportunity to transform knowledge in their field exponentially.

Scholars and Inquisitors

By: Gabriel E. Levy B.

A few days ago I ran into a teaching colleague who excitedly told me he had discovered an AI application that he uses to identify which of his students' submissions were written with AI.

According to his passionate account, this application is one hundred percent reliable and has caught practically his entire class using AI.

It was impossible for me to remain silent:

“If you intend to hunt down the use of AI tools because they prevent students from exercising their own competencies, isn’t it a bit contradictory that you yourself use AI for that purpose? It should be your own language skills, for example, that reveal that your students are using AI; otherwise you are sending a rather confusing message.”

My friend’s expression was not the most accommodating, and since that day he has not answered my messages. That is why I decided to write this article, which is quite different from what you are used to reading in this space, and for which I offer my most sincere apologies in advance.

I want to make clear that, for me, it is indisputable that academic environments must promote critical thinking, the creation of one’s own ideas and students’ academic production. It is therefore not at all pedagogical to let a student submit work written by AI as a substitute for his or her own intellectual effort.

However, it is equally illogical to pursue Artificial Intelligence tools as if they were the enemy. I believe the challenge for the contemporary teacher is to incorporate them into the classroom in a way that enhances students’ reasoning rather than replacing it.

But academia is not the only field of the humanities where the obscurantist exercise of witch-hunting is occurring.

Virtually every newsroom in the world has imposed AI-detection tools on its editors, hunting down any journalist who uses AI. And it is not only fully AI-written pieces that are persecuted, but any use at all: a mere 10% detected can get a journalist dismissed from a newsroom.

I do not dispute that letting AI write a complete journalistic story is not only inadequate but dangerous.

The inability of models such as ChatGPT, Gemini or Grok to distinguish fiction from reality is a latent threat to journalism. Of course, the journalist’s pen and nose for news, the cross-checking of sources and professional ethics remain the best strategy for getting closer to the truth.

But it is quite another thing to prohibit its use entirely when it can help develop an idea, expand a concept, correct style, improve grammar, avoid repeated words or catch key aspects of a story overlooked in the rush, among thousands of other possible uses.

My personal experience

Between 2017 and 2023 I produced 735 articles for this space without the help of any generative AI tool, basically because such tools did not exist, with the exception of Word’s spell checker, which will always be useful.

If someone had proposed in those years that I use a tool to help me write, I would have replied that I would never accept it, because the articles would lose my style. The arrival of ChatGPT in 2023, however, unexpectedly simplified my work.

Without compromising the content, the development of ideas, much less the sources or information, I use Generative AI to improve my texts, expand concepts and develop ideas, without allowing the AI to define or capture the content — only what my school teacher called “the carpentry.” This not only shortens the time it takes to build a text, but reduces the unnecessary cognitive load on my brain.

When I ran my articles through the tool my colleague mentioned, the verdict was, on average, that “65% of my texts are written by humans and 35% by AI” (enough to get me fired from the newsroom of any media outlet).

These figures confirmed that I would never use this particular tool with my students, because it is not as accurate as my colleague suggested. It is far from evaluating what really matters — the originality of the ideas — and instead evaluates the way in which they were expressed, a variable that in my opinion should carry great weight only in a course on journalistic writing or something similar. In law, for example, if the goal is to evaluate the quality of the drafting of a lawsuit, the tool is undoubtedly very valuable.

The key is in ETHICS

The real underlying problem I want to highlight is this: while epidemiologists have accelerated scientific research by decades without compromising the quality of their work, achieving drugs and vaccines that would otherwise have taken half a century to develop, humanists are more concerned with pursuing AI tools than with implementing them properly to enhance their own work.

Both epidemiologists and humanists bear great ethical responsibility — either could use AI to create diseases or make a virus more contagious — yet epidemiologists are not abandoning AI in the name of ethical risk. In the humanities, by contrast, the use of Artificial Intelligence itself is persecuted to prevent its misuse.

I honestly cannot imagine an epidemiologist criticizing a colleague for using AI to identify the RNA of viruses instead of deciphering it with his own brain, much less imagine the Nobel committee stripping David Baker of his award for using AI in protein design — on the contrary, that innovative use of AI is precisely what earned him the Nobel Prize in Chemistry.

Nor do I think ChatGPT or Perplexity should decide a court case, but I am convinced they can greatly simplify a court’s work in systematizing information, thereby avoiding many human errors born of overwork and the bias of human subjectivity. The meaning of the ruling must be determined by the judge, but much of the process can be streamlined by AI.

I don’t think AI should replace a psychologist either, but it can support many people in their treatments and therapies.

People are learning languages with AI, software developers are simplifying their code, engineers are improving their structural calculations and, in general, Artificial Intelligence applications are enhancing our capacity to produce knowledge.

AI-assisted systematization of an ethnographic immersion could shave years off an anthropologist’s work; AI could help predict the next pandemic, combat climate change and, if well used, even avert the next world war.

Every tool created by human beings — from the wheel to the printing press, the cell phone, the Internet or AI — is a powerful instrument that transforms our society and our environment. The challenge, in every case, is to make the best possible use of it while avoiding dogmatism.

Let us remember that for decades movements to burn printing presses were financed because the press threatened the medieval cultural industries — an attitude, incidentally, very similar to my colleague’s.

Finally, of the 1,431 articles I have written to date, I reiterate that this is the only one I decided to write in the first person, breaking my neutral style, given its strong editorial slant — one I was unable to restrain in this text — which of course reflects only my own way of thinking.

I also clarify that, to avoid any conflict of interest with the generative AI I so respect and appreciate, this article was written one hundred percent by a human. And although it would have been much “purer” to use a typewriter, I must confess it was inevitable that I write it in Microsoft Word — which, after all, is also an algorithm. One more reason for my colleague to stop answering my messages.