Google Removed Over 1.11 Lakh Harmful Content in June Under New India IT Rules
Google said that some of its products use automated identification processes to prevent the spread of harmful content, such as child sexual abuse material and violent extremist content.
The report covers the complaints Google received and the action taken on them during the specified one-month reporting period, along with removals resulting from automated detection mechanisms on Google's SSMI platforms.
The period covers June 1 to June 30. Google is expected to publish further monthly transparency reports in the coming months.
According to Google, some requests may allege violations of intellectual property rights, while others may claim breaches of local laws that restrict the publication of certain types of content, such as defamation laws.
"In addition to what our users report, we invest heavily in combating harmful content online and use technology to detect and remove it from our platforms," the company said in its monthly compliance report.
The company added that its automated identification processes resulted in the removal of 528,846 accounts nationwide.
Google removed 1,11,493 pieces of harmful content in June this year in accordance with the new India IT Rules, 2021.
According to Google's Monthly Transparency Report, the majority of the content that was removed fell under the category of copyright infringement, with the rest falling under other categories like trademarks, court orders, explicit sexual material, fraud, and others.
Within the same time frame, the internet company received 32,717 complaints from citizens of the country about third-party content on various Google platforms that they believed violated their personal or local legal rights. The complaints fall into numerous categories.