AI skeptics have raised many valid concerns, but the one nearest and dearest to my heart is the censorship of content.
Artificial intelligence systems often include guardrails that prevent users from generating content their creators deem dangerous. Some of this I certainly appreciate; for instance, I’m delighted that many AI systems avoid profanity and do not generate pornography or gratuitously violent imagery.
However, I’m less enthusiastic about political censorship and censorship in the marketplace of ideas. For instance, most people in the United States do not believe that Lee Harvey Oswald was solely responsible for the assassination of JFK, yet AI systems still tend to give a vague answer as to who was responsible, essentially dodging the question despite the decades of research devoted to the subject. If AI generates all of our content, will we only be allowed to think politically sanctioned thoughts?

Addressing the Issue of Censorship
At Pallas, we are working on this issue first and foremost by ensuring our artificial intelligence system has “read” the best research from the most diligent and qualified journalists, researchers, and historians.
Of course, “best” is subjective, so the issue of censorship can never be fully resolved: as I mentioned, I appreciate AI systems that avoid pornography and profanity, but I’m sure many would not share that sentiment, especially in certain scenarios.
Likewise, I’m sure that many would disagree with our choice of journalists as the true, authoritative experts on the post-World War II history topics for which we seek to create educational courses. Who is right?
Ultimately, it will fall to each student and family to decide what solution is right for them. I do believe that as AI systems develop, we are likely to have more options available to us, and thus the opportunity to find ones that match our preferences, provided, of course, that we remain vigilant in seeking them out.
As we move forward into an AI-driven future, it’s crucial that we remain aware of these potential pitfalls. The power of AI to generate and filter content is immense, but it comes with the risk of limiting the diversity of thought and expression. By consciously working to preserve a wide range of perspectives and encouraging critical thinking, we can hope to reap the benefits of AI while mitigating its potential for censorship.

