New Zealand is reportedly planning a new system to identify users showing signs of violent extremism on AI platforms and guide them towards specialist support services. According to Reuters, the move comes as AI giants such as OpenAI, Google, and Anthropic face rising concerns about how their tools are used online. OpenAI was reportedly warned by the Canadian government earlier this year after it allegedly banned a school shooter from its platform without informing the government.

Rising Pressure On AI Firms

According to Reuters, AI companies are under intense pressure from regulators and governments to prevent the misuse of AI tools, prompting calls for safety measures to stop harmful behaviour online. The report notes that the upcoming system in New Zealand will be developed by ThroughLine, a startup that has previously worked with OpenAI, Google, and Anthropic. ThroughLine connects users facing mental health issues with local support services and maintains a global network of more than 1,600 helplines across 180 countries. The company is also in discussions with The Christchurch Call, a global initiative launched after the 2019 New Zealand terror attacks, which is expected to help guide the development of the new system.

(This is a developing story; check back for updates)