How Artificial Intelligence Videos and Pictures Affect Research

The arrival of AI has marked a seismic shift in the world of research. When first introduced, many heralded artificial intelligence as the technology that could automate research and bring it into the modern era. While that might have been the intent, many became more concerned with the dangers of AI.

Rather than being intrepid scholars looking to do more research, many early adopters of AI were people hoping to get around research requirements. This included students cheating on assignments and even professionals using AI to fabricate research citations, intentionally or not.

That was with chatbots, but now that AI can create entire videos and images, concerns about the dangers of this technology have resurfaced. New debates are beginning as people try to work out the full effects of AI-generated images and film on the research process.

How Do AI Image Models Work?

To create an image, AI tools such as Midjourney and DALL-E rely on machine-learning systems called diffusion models. These models are trained on millions of images gathered from the internet, along with the text descriptions that accompany them, and they absorb the patterns in that data into their parameters.

This way, when someone types a prompt, the model does not look up a matching picture in a database. Instead, it starts from random noise and gradually removes that noise, guided by the text, until an image matching the description emerges. Images have long been a tricky subject for AI because of the difficulty of deciphering visual content, especially content without accompanying text.
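As a rough illustration of the diffusion idea, here is a toy NumPy sketch of the forward "noising" process that a diffusion model learns to reverse during training. The noise schedule, image size, and random stand-in for a training image are illustrative assumptions, not values from any real model such as Midjourney or DALL-E.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # illustrative linear noise schedule
image = rng.standard_normal((32, 32))   # stand-in for a training image

slightly_noisy = forward_diffuse(image, 10, betas, rng)
mostly_noise = forward_diffuse(image, 999, betas, rng)

# Early steps leave the image mostly intact; by the final step almost all
# signal is gone, leaving near-pure Gaussian noise. Generation runs this
# process in reverse, denoising step by step under guidance from the prompt.
print(np.corrcoef(image.ravel(), slightly_noisy.ravel())[0, 1])  # close to 1
print(np.corrcoef(image.ravel(), mostly_noise.ravel())[0, 1])    # near 0
```

A trained model replaces the random noise removal with a neural network that predicts, at each step, which noise to subtract so the result stays consistent with the text prompt.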

The result is that early AI images and videos tended to look rough. Many had visible flaws, from skewed proportions to disfigured figures. However, newer models and advances in the underlying technology have made some tools quite good at creating both videos and images. Issues remain, but they are far less noticeable than before.

Have Researchers Adopted AI?

Despite pushback from many research groups, some have taken a more cautiously optimistic route, allowing limited use of AI images and videos in the research process. These images mostly appear in scientific papers or social media posts to visualize what the researcher is trying to say, and their users cite several benefits.

“They are using tools like DALL-E 3 for generating nice-looking images to frame research concepts. I gave a talk last Thursday about my work and I used DALL-E 3 to generate appealing images to keep people’s attention.”

Juan Rodriguez, Researcher at ServiceNow Research in Montreal, Canada.

These images have been useful as visual aids for research and for sparking casual readers' interest in the work. The biggest benefit is the time saved: because researchers do not have to source the images themselves, they have more time for other tasks.

This is especially useful for designs as well as other visual aids such as tables and presentations. Beyond generating images from scratch, some AI tools have been used to translate text within images and to improve image quality, tasks that would be very time-consuming through conventional methods.

In terms of AI video, usage has been muted as fewer researchers use video-making technology outside of those who are actively studying it. However, there are signs that this might be changing due to the release of Sora by OpenAI.

Are There Risks With Using AI Tools in Research?

While there are undoubtedly some benefits, these tools also carry several risks. The big problem is that AI-generated images cannot always get the details of illustrations right, particularly text. There have been many cases of incorrect fonts, sizing, and spelling. This is compounded by the fact that you cannot simply edit the text; you have to generate an entirely new image whenever something looks wrong.

The same is true of image details, which can appear exaggerated or disproportionate. A recent example was a paper published last February in the journal Frontiers in Cell and Developmental Biology. The researchers used Midjourney to create an image of a rat, and the published figure showed a cartoon rat with grossly oversized genitalia, annotated with gibberish labels.

The fact that such an image made it through to publication caused a stir about the use of AI in the writing process and led others to call for stricter requirements.

While that incident might have been a humorous accident, other risks are far less amusing. People have pointed to the danger of similar errors appearing in AI-generated data tables and figures. There is no telling how accurate these are, as there is currently no formal method for detecting such content.

Unsurprisingly, there has been backlash from research groups who remain skeptical about the technology's practical uses. For example, a poll conducted on Twitter, Facebook, and Instagram surveyed around 90 paleontologists about the accuracy of AI-generated images of ancient life. Only one in four respondents believed that AI images should be allowed in publications.

Their main issue seems to be that these images tend to be inaccurate and to copy existing images. Because a model cannot read new papers or research on its own, there is no easy way to make it more accurate. They also question whether AI can accurately depict different sections of a fossil, which is essential in fields such as paleontology.

How Are Research Groups Dealing With This Issue?

To combat this issue, publications have adopted different countermeasures. Springer Nature has taken the strictest stance, banning AI-generated images from its publications. Others, such as the Science family of journals, allow AI images and text only with permission from the editors. PLOS ONE allows AI tools, but researchers must disclose which tools they used and how, and the submission is then subject to review.

Our geniusOS research and programming team has been following these trends closely to understand the dangers and potential benefits of AI in different fields. We hope to use this information to learn more about the pitfalls of integrating AI into our company and the ways we can minimize the risks.