Posts

Showing posts with the label AI

AI and cyber security: what you need to know

Image
  Understanding the risks - and benefits - of using AI tools. Ignited by the release of ChatGPT in late 2022, artificial intelligence (AI) has captured the world's interest and has the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed in a safe and responsible way, especially when the pace of development is high, and the potential risks are still unknown. As with any emerging technology, there's always concern around what this means for security.  This guidance is designed to help managers, board members and senior executives (with a non-technical background) to understand some of the risks - and benefits - of using AI tools . Managers don't need to be technical experts, but they should know enough about the potential risks from AI to be able to discuss issues with key staff. What is artificial intelligence? Artificial intelligence  (AI) can be described as  ‘Any computer system that can perform task

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Image
Microsoft has released an open access automation framework called  PyRIT  (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft,  said . The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft.   PyRIT comes with five interfaces: target, datasets, scoring engine, the ability to support multiple attack strategies, and incorporating a memory component that can either take the form

Microsoft AI Researchers Accidentally Expose 38 Terabytes of Confidential Data

Image
Microsoft on Monday said it took steps  correct a glaring security gaffe that led to the exposure of 38 terabytes of private data. The leak was discovered on the company's AI GitHub repository and is said to have been inadvertently made public when publishing a bucket of open-source training data, Wiz said. It also included a disk backup of two former employees' workstations containing secrets, keys, passwords, and over 30,000 internal Teams messages. The repository, named " robust-models-transfer ," is no longer accessible. Prior to its takedown, it featured source code and machine learning models pertaining to a  2020 research paper   titled  "Do Adversarially Robust ImageNet Models Transfer Better?" "The exposure came as the result of an overly permissive  SAS token  – an Azure feature that allows users to share data in a manner that is both hard to track and hard to revoke," Wiz said in a report. The issue was reported to Microsoft on June 22,