The use of AI technology to create inappropriate images of women has increased

A new study revealed this / screenshot

The use of apps and websites that create inappropriate fake photos of women with the help of artificial intelligence (AI) technology has increased dramatically in recent months.

This was revealed in a new study.

A study by a company called Graphika stated that in September 2023 alone, 24 million people used websites that produced deepfake images.

Most of the apps and websites using deepfake technology marketed themselves on popular social networks to attract users.

It should be noted that deepfake is a technology in which a person's face or voice can easily be replaced with another's, making it appear that a photo, video or audio clip shows a person who is not actually in it.

According to the study, the number of links to such apps on social media platforms such as X increased by more than 24 percent during 2023.

According to the research, most such services only work on photos of women.

The researchers said that such content is posted on social media platforms, causing problems for the women targeted.

The research also revealed that on X, one app posted a photo and told people that they could create inappropriate images and send them to the person in the original photo.

One such app also ran sponsored content on YouTube.

In this regard, a Google spokesperson said the company does not allow ads that contain inappropriate content, and that ads violating its policies are removed.

Such fake photos or videos of movie stars are already going viral on the internet, but experts fear that advances in AI technology will make deepfake software easier to use while making fake content harder to detect.

According to the researchers, ordinary people are now being targeted as well.

The study found that most of the women victims were unaware of the images, and those who were aware faced difficulties in taking legal action.
