Prediction
AI changes everything…and nothing
Name
Cindy Royal
Excerpt
“There will be mishaps and confusion. Same as it ever was.”
Prediction ID
43696e647920-24

In the final discussion post in my digital media innovation capstone course this semester, I asked the question, “What are your thoughts on the future of innovation?” It was no surprise that every student mentioned artificial intelligence. Many expressed concerns about ethical implications and its effect on future employment. But in most cases, their comments were balanced with a healthy sense of optimism and involvement. As one said, “Remaining up-to-date, never-ending education, flexibility, and active involvement in ethical issues related to technology will be necessary for success in the changing professional environment.”

As I have begun talking about AI over the past several months — in classes, at conferences, and in conversation with friends and colleagues — I keep repeating that we are going to look back at the past couple decades of search as the dark ages of information. “Remember when we had to Google stuff and then go to websites and read them and hope they had the answers to our questions?” Google’s algorithm that now gives us excellent-quality search results will feel as antiquated as a MySpace Top 8 when we are able to have a conversation with a bot that seemingly knows everything. These all-knowing platforms are now being referred to as “artificial general intelligence.”

We’ll also look back on this time when we gladly gave up volumes of personal information to search and social media companies in exchange for the value we perceived in using them. But will we also remember that we didn’t solve the problems of misinformation, bias, and abuse when we had the chance? AI just exacerbates these dilemmas.

Looking forward, as we talk about AI, we have to consider how it will affect the ways that information is stored and distributed. Now we have volumes of public content that are used to train AI platforms, created by millions of people. But if we no longer need to go to a website to get information, will many websites become unnecessary? If so, then what will be training the AI? What will be the format of the data? Will the presentation of the remaining web spaces need to be more fluid and customized? What will the platforms of the future be? And who will be in charge of them? Who will have the skills to work in these fields? And how will media education adapt? We have to look a few paces ahead.

So, what is my prediction for 2024? AI will become more accessible and more useful, like search and social media. We will gladly give away all our private information in exchange for the value we perceive in using it. We’ll use AI platforms to write emails, contribute to stories, edit copy, analyze and present data, create graphics, prepare college papers, learn to code…maybe even write our Nieman Lab predictions.

We’ll also worry about technology taking our jobs. Spreading falsehoods. Information bias and takeovers by malicious actors.

We can’t predict what AI will look like in a year. But we have an idea of where this is going, because we’ve been there. There will be company shakeups, new platforms, emergent players. There will be ethical, social, and legal implications. There will be mishaps and confusion. Same as it ever was.

Maybe the stakes are higher now, with a technology so few understand and over which so few have control. My best advice is not to avoid it. Get knowledgeable, but be critical. How we should have been all along.

Cindy Royal is a professor and director of the Media Innovation Lab at Texas State University.