OpenAI announced on its blog that GPT-4 can now accept images as input and pass examinations designed for humans, such as legal bar exams, the LSAT, and the GRE, as well as various math, biology, and art history exams. “[GPT-4] exhibits human-level performance on various professional and academic benchmarks,” the company added, while noting that the model is still being developed. Many organizations are already supplementing their products with GPT-4: Duolingo to deepen its conversation practice, Be My Eyes for visual object recognition to help people with low vision, Stripe to combat fraud, Morgan Stanley to organize its wealth management knowledge base, and the nation of Iceland, which wants to use it to preserve its language.

These advances toward what OpenAI calls artificial general intelligence (AGI) may be reminiscent of the fictional Skynet neural network in the Terminator film franchise. However, this “life assistant,” as OpenAI puts it in its introductory video, still has its limitations. For instance, it can “introduce security vulnerabilities into the code it produces,” much like a human, OpenAI said. “Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors),” OpenAI said.

Beyond reliability, cybersecurity and privacy concerns remain. Dan Lohrmann, Field Chief Information Security Officer at Presidio, told VPNOverview that entering sensitive company or personal data into the chatbot should be avoided altogether. “Employees must be diligent in not allowing any sensitive, proprietary, confidential or personal data to be entered into ChatGPT,” Lohrmann said.

Cybersecurity and Privacy Implications of GPT-4

Even as AI and tech enthusiasts laud GPT-4’s advancement of deep learning, many cybersecurity and privacy concerns linger. Employees across the globe are increasingly using ChatGPT and similar AI models to aid them in their work, which can allow sensitive company and personal data to make its way into the system. A recent report from cybersecurity firm Cyberhaven found that a large number of employees across several industries did exactly that. “During the week of February 26 – March 4, workers at the average company with 100,000 employees put confidential documents into ChatGPT 199 times, client data 173 times, and source code 159 times,” Cyberhaven’s report said.

Cameron Coles, the author of Cyberhaven’s report, told VPNOverview that questions surrounding the security of collected data persist even as the technology improves. “I’ve tried GPT-4 and it is an impressive leap over GPT-3 in terms of capability but questions about data security still remain,” Coles said. “In OpenAI’s 98-page document introducing GPT-4, they proudly declare that they won’t disclose any details about the contents of their training set.” Richard Stiennon, Chief Research Analyst at IT-Harvest, said that user data will almost certainly come into play at some point. “Don’t become the product. While OpenAI has clarified its terms to assure users that they would not use their data, it would still be wise to assume that eventually, it will monetize your questions to at least target you with ads. Especially the Bing version,” Stiennon said.
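For organizations that want to act on Lohrmann's advice, one practical mitigation is to screen prompts for obviously sensitive material before they ever reach a chatbot. The snippet below is a minimal, hypothetical sketch of such a pre-submission filter; the pattern list and the redact_sensitive helper are illustrative assumptions for this article, not any vendor's product, and a real data-loss-prevention policy would cover far more cases:

```python
import re

# Illustrative patterns only; a real DLP rule set would be much broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact_sensitive(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which rules fired."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, hits

if __name__ == "__main__":
    clean, hits = redact_sensitive(
        "Summarize this: contact jane.doe@example.com, key sk-abc123DEF456ghi789JKL012"
    )
    print(clean)   # placeholders instead of the raw values
    print(hits)    # ['email', 'api_key']
```

A filter like this only blocks the obvious leaks, of course; it does nothing about confidential documents pasted wholesale, which is exactly the behavior Cyberhaven's numbers describe.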

Hackers, Bad Actors Leverage AI for Scams, Malware Campaigns

Threat actors are already leveraging large language models (LLMs), deep neural networks that mimic how humans learn, such as GPT-4, to spread malware and scams via online platforms. Jeff Sims, Principal Security Engineer at HYAS, and his team created a proof-of-concept malware dubbed BlackMamba to demonstrate how threat actors can misuse neural network code synthesis to create new-age malware. Malware like this “could include a whole host of post-exploitation capabilities which are stored in the executable as benign text prompts (most likely as encrypted strings), waiting to be passed to a large language model like GPT-3 [or 4] and synthesized into malicious code,” Sims explained.

Threat actors are also leveraging generative AI videos to create fake personas that dupe victims on YouTube, and any AI image generation software, such as OpenAI’s DALL-E, could feasibly be used to do the same. What’s more, threat actors could use the power of ChatGPT’s language processing to craft more convincing phishing emails; such scams have long been identifiable by their poor grammar, spelling mistakes, and other language-related cues.

In the right hands, however, GPT-4 can also be used to improve cybersecurity. Reza Rafati, who works at Cyberwarzone.com, noted that he used it to quickly identify dangerous phishing links. “Chat GPT-4 was able to detect these variations and correctly identify the bad links as malicious,” Rafati said. Potentially, this technology could be tuned to detect ransomware intrusions and malicious network activity.
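As a rough illustration of how a check like Rafati's might be wired up, here is a hedged sketch using the openai Python package (the pre-1.0 ChatCompletion interface that was current when GPT-4 launched). The prompt wording and the lookalike URL are assumptions made for this example, and a model's verdict is no substitute for a proper URL-reputation service:

```python
import openai  # assumes openai < 1.0 and OPENAI_API_KEY set in the environment

def classify_url(url: str) -> str:
    """Ask GPT-4 whether a URL looks like phishing. Illustrative only."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You label URLs as MALICIOUS or BENIGN based on "
                        "typosquatting, odd TLDs, and deceptive paths. "
                        "Answer with one word."},
            {"role": "user", "content": url},
        ],
        temperature=0,  # keep the labels as deterministic as possible
    )
    return response["choices"][0]["message"]["content"].strip()

# Hypothetical lookalike domain, for demonstration purposes only
print(classify_url("http://paypa1-secure-login.example.com/verify"))
```

The same pattern, swapping the system prompt and the input, is what makes the defensive uses Rafati describes so easy to prototype, and what makes the offensive uses Sims warns about equally cheap.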

Building AI Responsibly

OpenAI and its partner Microsoft’s Azure division co-designed a custom supercomputer, reportedly costing hundreds of millions of dollars, just to handle the GPT-4 workload. Generative AI seems to be the future and is now a booming sector, but we should not label it as sentient just yet. In February 2023, Google launched its own AI chatbot, Bard, which uses a different language model dubbed LaMDA to rival ChatGPT, while other tech giants like Alibaba, Microsoft, Huawei, and Baidu are developing their own versions. Just a few hours after launch, GPT-4 testers reported being able to have it describe images, generate recipes, code video games, create websites, and more: a big step up from GPT-3. For now, due to high demand, users wishing to try the new technology can join the GPT-4 waiting list or subscribe to ChatGPT Plus for 20 dollars a month for premium access.

Building AI technology with cybersecurity, privacy, and social responsibility from the ground up is key. At the end of the day, OpenAI’s stated mission is artificial general intelligence, which it describes as technology that “outperforms humans at most economically valuable work.” “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We, therefore, think a lot about the behavior of AI systems we build in the run-up to AGI, and the way in which that behavior is determined,” OpenAI said in another of its blog posts, adding that GPT-4 is now much less likely to produce offensive answers or dangerous suggestions.
