IMSc Conference on IT Security

We organize a conference for collecting IMSc points in the context of the IT Security BSc course in the spring semester of the 2023/24 academic year at BME. Beyond IMSc point collection, the goal of the conference is to encourage students to deep-dive into some hot topics of IT security, to get familiar with the challenges and recent research results, and to share knowledge with other students in the form of short presentations. We do hope that the conference will shed light on the beauty of the field of IT security and some of its exciting research areas, and it will stimulate both the active participants of the conference and all other students enrolled in the IT security course to engage in further studies in the domain of IT security.

The Call for Papers (CfP) for the conference is available here.

Conference topics

all, uav, cyber-physical-system, vehicle, network-security, power grid, machine-learning, data-evaluation, privacy, economics, malware, binary-similarity, cryptography, machine-learning-security, LLM-security, LLM, copilot, federated-learning, poisoning, password-manager, AAA, OAuth, web-security, Kerberos

Prompt Injection in LLM

Large Language Models (LLMs) are a new class of machine learning models that are trained on large text corpora. They are capable of generating text that is indistinguishable from human-written text. The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts. There exist several attacks that create adversarial prompts against LLMs.

Tags: machine-learning, machine-learning-security, LLM-security, LLM

References:

Poisoning Code Completion Models (CoPilot)

Large Language Models (LLMs) are a new class of machine learning models that are trained on large text corpora. They are capable of generating text that is indistinguishable from human-written text. One of their most popular application is code completion, where the model completes the source code written by a developer. Developers are found to code up to 55% faster while using such tools. Among these tools, GitHub Copilot is by far the most popular. GitHub Copilot leverages context from the code and comments you write to suggest code instantly. With GitHub Copilot, you can convert comments to code autofill repetitive code, and show alternative suggestions. However, GitHub Copilot is trained on public repositories, and therefore, it is vulnerable to data poisoning; a bad actor may intentionally contaminate the training dataset with malicious code that may trick the model into suggesting similar patterns in your code editor.

Tags: machine-learning, machine-learning-security, copilot, LLM-security

References: