Exploring the projects and publications of IMAGIN Lab
Real-time demonstration of LLMs detecting cyber threats in network traffic, deployed in a Kubernetes cluster
A showcase of a trained LLM for automatic NIDS rule generation
A system performs joint placement and chaining of virtual network functions (VNFs) based on a genetic algorithm in response to a request for virtual network services, including an in-line service. The request includes a description of a virtual network of VNFs and of the virtual links connecting them. A description of a physical network comprising servers and physical links is also provided. Each chromosome in a population encodes a mapping between the virtual links, enumerated to form a locus, and a corresponding sequence of server pairs. Each chromosome is evaluated against objective functions, subject to constraints, to identify a chromosome as a solution. The VNFs are placed on the servers according to the mapping encoded in the identified chromosome: each VNF is mapped to one of the servers, and each virtual link is mapped to a path composed of one or more of the physical links.
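As a rough illustration only (the text above does not disclose code), the sketch below shows how such a chromosome encoding and evaluation loop might look, assuming a toy instance in which each virtual link occupies one locus and maps to a pair of server identifiers; the fitness function is a placeholder for the objective functions and constraints described above.

```python
import random

# Hypothetical toy instance: 3 virtual links (loci) and 4 physical servers.
NUM_VIRTUAL_LINKS = 3          # each locus corresponds to one virtual link
SERVERS = [0, 1, 2, 3]         # identifiers of physical servers

def random_chromosome():
    """One chromosome: for every virtual link, a (src_server, dst_server) pair."""
    return [(random.choice(SERVERS), random.choice(SERVERS))
            for _ in range(NUM_VIRTUAL_LINKS)]

def fitness(chromosome):
    """Placeholder objective: prefer mappings that reuse few distinct servers
    (a stand-in for the resource-consolidation objectives described above)."""
    used = {s for pair in chromosome for s in pair}
    return -len(used)  # higher is better

def evolve(pop_size=20, generations=50):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, NUM_VIRTUAL_LINKS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                      # mutation
                child[random.randrange(NUM_VIRTUAL_LINKS)] = (
                    random.choice(SERVERS), random.choice(SERVERS))
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best mapping (virtual link -> server pair):", best)
```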
This study presents a comparative analysis of Machine Learning (ML) and Large Language Models (LLMs) for cyber threat detection. We evaluate the performance of various ML algorithms (e.g., Random Forest, Gradient Boosting) and fine-tuned LLMs (e.g., LLaMA3, Falcon) on multiple datasets, considering metrics such as F1-score, real-world applicability, explainability, interpretability, scalability, and adaptability to evolving threats. Our results show that while ML models often offer strong performance and interpretability, LLMs show the potential for high accuracy, especially when dealing with complex threat patterns. However, the computational requirements and ambiguities associated with LLMs present challenges to widespread adoption. To maximize the benefits of both approaches, we propose several future research directions: improving the interpretability of LLMs, reducing their computational cost, and building synergistic solutions that harness both ML models and LLMs.
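For concreteness, here is a minimal, self-contained sketch of the F1-score comparison described above, using made-up labels and predictions in place of the actual Random Forest and fine-tuned LLM outputs (the datasets and models themselves are not reproduced here).

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labels: 1 = attack flow, 0 = benign flow.
labels          = [1, 0, 1, 1, 0, 0, 1, 0]
rf_predictions  = [1, 0, 1, 0, 0, 0, 1, 0]   # e.g. a Random Forest baseline
llm_predictions = [1, 0, 1, 1, 0, 1, 1, 0]   # e.g. a fine-tuned LLM classifier

print("Random Forest F1:", round(f1_score(labels, rf_predictions), 3))
print("LLM F1:          ", round(f1_score(labels, llm_predictions), 3))
```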
Despite becoming lighter and smaller, camera-equipped UAVs are extensively used in countless applications, which poses significant safety and security challenges. This surge in security requirements has driven researchers to develop solutions that provide protection in real time. However, modern cryptographic primitives tend to be power-hungry, which is incompatible with resource-constrained UAVs. In this paper, we propose a lightweight and versatile chaos-driven cryptosystem featuring minimal resource usage and adjustable randomness. It draws on three combinational enigmas (the Rubik's cube, Sudoku, and the Scytale) together with several inherent customizations, such as a novel permutation/substitution mixing, chaotic Rijndael and hashing, dynamic key-dependent substitutions, and Serpent-like cascade encryption. The paper presents a case study on image confidentiality, whose findings can be extended to protect the privacy of the visual scenes exposed by UAV cameras and to prevent interception of their footage. Experimental analyses, carried out in comparison with other native and customized algorithms in the field, demonstrate the high security margin of the proposed cryptosystem, its strong sensitivity to the slightest alteration, its robustness against cryptanalytic attacks, and its suitability for securing aerial photography against diverse attacks with high success rates.
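The cipher design itself is not reproduced here; the following minimal sketch only illustrates the general idea of a chaos-driven, key-dependent permutation, using a logistic map as the chaotic source and a list of integers as a stand-in for image pixels. It is not the proposed cryptosystem and provides no real security.

```python
def logistic_sequence(x0, r, n):
    """Logistic-map trajectory; with r near 4 it behaves chaotically and is
    highly sensitive to the seed x0 (here playing the role of a key)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_permutation(data, key_seed, r=3.99):
    """Key-dependent permutation: reorder indices by their chaotic values."""
    chaos = logistic_sequence(key_seed, r, len(data))
    order = sorted(range(len(data)), key=lambda i: chaos[i])
    return [data[i] for i in order], order

def invert_permutation(permuted, order):
    original = [None] * len(permuted)
    for pos, i in enumerate(order):
        original[i] = permuted[pos]
    return original

pixels = list(range(16))                       # stand-in for image pixel bytes
scrambled, order = chaotic_permutation(pixels, key_seed=0.387)
restored = invert_permutation(scrambled, order)
assert restored == pixels
print(scrambled)
```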
Recently, UAV-mounted mobile base stations (UAVs-MBS) have gained significant attention as an effective solution for providing essential wireless communication services and connectivity to ground users (GUs) in various environments. Smart antenna technologies are vital for both current and next-generation communication networks, particularly in 5G and beyond. However, implementing smart antenna-assisted UAVs-MBS networks presents a complex multi-objective optimization problem, and traditional techniques often fall short in efficiency and inclusivity. In this study, we propose a viable hybrid method to tackle the challenge of minimizing the number of smart antenna-enabled UAVs-MBS required to achieve higher Line-of-Sight probabilities and coverage levels, while also determining their optimal 3D coordinates to serve a group of dispersed GUs. Our approach utilizes a low-complexity multi-objective algorithm called SERAPH, which incorporates a novel two-stage hybrid evolutionary algorithm alongside a multi-criteria decision-making method. Additionally, SERAPH features 3D multi-beamforming and coordination mechanisms to enhance wireless coverage and effectively extend the system's lifespan. We validate the effectiveness of SERAPH through comprehensive comparative analyses with various state-of-the-art algorithms. Our results demonstrate that the proposed method significantly outperforms existing approaches in terms of accuracy and efficiency.
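SERAPH itself is not reproduced here; the sketch below merely illustrates the kind of conflicting objectives involved (minimizing the number of UAVs-MBS while maximizing GU coverage) together with a toy weighted-sum stand-in for the multi-criteria decision step. All positions, the coverage radius, and the weights are invented.

```python
import math, random

# Hypothetical scenario: ground users (GUs) scattered over a 1 km x 1 km area.
random.seed(1)
ground_users = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(40)]
COVERAGE_RADIUS = 300.0   # metres, assumed per-UAV footprint

def covered(gu, uavs):
    return any(math.dist(gu, (x, y)) <= COVERAGE_RADIUS for (x, y, _h) in uavs)

def objectives(uavs):
    """Two conflicting objectives: minimise UAV count, maximise GU coverage."""
    coverage = sum(covered(gu, uavs) for gu in ground_users) / len(ground_users)
    return len(uavs), coverage

def scalarised_score(uavs, w_count=0.4, w_cov=0.6):
    """Toy stand-in for a multi-criteria decision step: weighted sum of
    normalised objectives (lower is better)."""
    count, coverage = objectives(uavs)
    return w_count * (count / 10) + w_cov * (1 - coverage)

# Compare two hypothetical deployments of (x, y, altitude) positions.
sparse = [(250, 250, 100), (750, 750, 100)]
dense  = [(250, 250, 100), (750, 250, 100), (250, 750, 100), (750, 750, 100)]
for name, deployment in [("sparse", sparse), ("dense", dense)]:
    print(name, objectives(deployment), round(scalarised_score(deployment), 3))
```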
This work addresses the growing cybersecurity risks in microservice-based systems, particularly in Kubernetes environments, where traditional defenses struggle against AI-driven attacks. We propose Sec-Llama, a compact and memory-efficient large language model (LLM) for network intrusion detection. Using a byte-based data transformation approach, Sec-Llama achieves a 96% F1-score while operating with only 172 MB of memory and 21.1 ms inference latency. An accompanying application was developed to monitor, train, and deploy the model for real-time intrusion detection.
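The exact transformation used by Sec-Llama is not shown here; the snippet below is a hypothetical example of a byte-based transformation that turns raw packet bytes into a fixed-length hex-token string a tokenizer could consume.

```python
def bytes_to_tokens(packet: bytes, max_len: int = 64) -> str:
    """Turn raw packet bytes into a whitespace-separated hex-token string;
    truncate long packets and pad short ones to a fixed length."""
    tokens = [f"{b:02x}" for b in packet[:max_len]]
    tokens += ["00"] * (max_len - len(tokens))        # pad short packets
    return " ".join(tokens)

# Hypothetical captured payload (e.g. the start of an HTTP request).
payload = b"GET /admin HTTP/1.1\r\nHost: victim\r\n\r\n"
text = bytes_to_tokens(payload)
print(text)
# The resulting string would then be paired with a label ("benign"/"attack")
# during training, or fed alone to the fine-tuned model at inference time.
```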
Edge computing-based microservices (ECM) are pivotal infrastructure components for latency-critical applications such as Virtual Reality/Augmented Reality (VR/AR) and the Internet of Things (IoT). ECM involves strategically deploying microservices at the network's edge to fulfill the low-latency needs of modern applications. However, achieving efficient resource and energy consumption while meeting latency requirements in the ECM environment remains challenging. Dynamic Voltage and Frequency Scaling (DVFS) is a common technique to address this issue: it adjusts the CPU frequency and voltage to balance energy cost and performance. However, selecting the optimal CPU frequency depends on the nature of the microservice workload (e.g., CPU-bound, memory-bound, or mixed). Moreover, microservices with different latency requirements can be deployed on the same edge node. This makes applying DVFS extremely challenging, particularly for a chip-wide DVFS implementation in which all CPU cores operate at the same frequency and voltage. To this end, we propose GAS, an enerGy Aware microServices edge computing framework that enables CPU frequency scaling to meet diverse microservice latency requirements at minimum energy cost. Our evaluation indicates that our CPU scaling policy decreases energy consumption by 5% to 23% compared to Linux governors while maintaining latency requirements and significantly contributing to sustainable edge computing.
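As a simplified illustration of the chip-wide DVFS decision described above (not the actual GAS policy), the sketch below picks the lowest-power shared frequency at which every co-located microservice still meets its assumed latency target; all profile numbers are invented.

```python
# Hypothetical latency profile of one chip-wide DVFS edge node: the service
# latency (ms) of each co-located microservice at each CPU frequency (GHz),
# and the node's average power draw (W) at that frequency.
FREQUENCIES = [1.2, 1.8, 2.4, 3.0]
POWER_W     = {1.2: 6.5, 1.8: 9.0, 2.4: 13.5, 3.0: 19.0}
LATENCY_MS  = {                      # microservice -> {frequency: latency}
    "auth":    {1.2: 35.0, 1.8: 24.0, 2.4: 18.0, 3.0: 15.0},
    "catalog": {1.2: 60.0, 1.8: 41.0, 2.4: 30.0, 3.0: 24.0},
}
SLO_MS = {"auth": 25.0, "catalog": 45.0}   # assumed per-service latency targets

def pick_chip_frequency():
    """Lowest-power frequency at which every co-located microservice still
    meets its own latency target (all cores share a single frequency)."""
    for f in sorted(FREQUENCIES):            # ascending power in this profile
        if all(LATENCY_MS[svc][f] <= SLO_MS[svc] for svc in SLO_MS):
            return f, POWER_W[f]
    top = max(FREQUENCIES)
    return top, POWER_W[top]                 # fall back to the maximum frequency

print("chosen frequency (GHz), power (W):", pick_chip_frequency())
```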
This work tackles the growing cyberattack risks faced by SMEs and organizations, highlighting the limitations of current intrusion detection systems and existing LLM-based approaches. We propose a novel LLM-powered rule generation method for Generative AI-based Network Intrusion Detection Systems (NIDSs). Using fine-tuned LLaMA and Falcon models, we introduce a Rule Matching Score (RuMS) metric and demonstrate that our fine-tuned models generate standards-compliant NIDS rules with 98.9% accuracy, while keeping memory usage minimal.
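The RuMS definition is not given in the abstract, so the snippet below only shows a hypothetical field-level matching score between a generated and a reference Suricata-style rule, as one plausible way such a metric could be computed.

```python
def rule_fields(rule: str) -> dict:
    """Split a Snort/Suricata-style rule into its header and option fields."""
    header, _, options = rule.partition("(")
    fields = {"header": header.strip()}
    for opt in options.rstrip(");").split(";"):
        if ":" in opt:
            key, value = opt.split(":", 1)
            fields[key.strip()] = value.strip()
        elif opt.strip():
            fields[opt.strip()] = ""
    return fields

def match_score(generated: str, reference: str) -> float:
    """Fraction of reference fields reproduced exactly by the generated rule."""
    gen, ref = rule_fields(generated), rule_fields(reference)
    matched = sum(1 for k, v in ref.items() if gen.get(k) == v)
    return matched / len(ref)

reference = 'alert tcp any any -> any 80 (msg:"SQLi attempt"; content:"UNION SELECT"; sid:100001; rev:1;)'
generated = 'alert tcp any any -> any 80 (msg:"SQLi attempt"; content:"UNION SELECT"; sid:100002; rev:1;)'
print(round(match_score(generated, reference), 2))   # 0.8: only the sid field differs
```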
To align with sustainability goals, such as Net Zero Emissions, serverless providers must incorporate energy considerations into their resource management models. In edge computing environments, where resources are scarce and latency requirements are stringent, serverless frameworks like OpenFaaS and OpenWhisk commonly rely on Kubernetes for function allocation on edge nodes. However, these frameworks often overlook energy consumption as a critical decision factor, which leads to increased energy costs and higher operational expenses (OPEX). To address this gap, we introduce the Go-Green-Go-Cheap (3GC) approach, a deadline-aware and cost-effective resource allocation strategy tailored for serverless service providers to minimize energy and execution costs. 3GC enables real-time resource allocation and leverages the per-core Dynamic Voltage and Frequency Scaling (DVFS) feature to finely tune the balance between execution time and energy consumption. Our evaluation demonstrates that 3GC surpasses existing allocation techniques, achieving cost savings ranging from 39.35% to 69.43%, while consistently meeting function latency requirements.
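The following toy sketch (not the 3GC algorithm itself) illustrates the underlying trade-off: among per-core frequency levels that meet a function's deadline, pick the one with the lowest combined energy and execution cost. All prices and profile numbers are assumptions for illustration.

```python
# Hypothetical per-core frequency levels on an edge node: each entry gives the
# core frequency (GHz), the resulting execution time (ms) of one serverless
# function invocation, and the energy used per invocation (J).
LEVELS = [
    # (freq_ghz, exec_time_ms, energy_j)
    (1.0, 180.0, 0.90),
    (1.6, 120.0, 1.10),
    (2.2,  85.0, 1.55),
    (2.8,  65.0, 2.20),
]
ENERGY_PRICE = 0.05     # assumed $ per joule (illustrative only)
EXEC_PRICE   = 0.0002   # assumed $ per ms of execution (illustrative only)

def allocate(deadline_ms):
    """Cheapest per-core frequency level that still meets the deadline."""
    feasible = [lvl for lvl in LEVELS if lvl[1] <= deadline_ms]
    if not feasible:
        return None                                  # deadline cannot be met here
    cost = lambda lvl: lvl[2] * ENERGY_PRICE + lvl[1] * EXEC_PRICE
    return min(feasible, key=cost)

for deadline in (70, 130, 200):
    print(deadline, "ms deadline ->", allocate(deadline))
```

Tight deadlines force the faster, more energy-hungry level, while looser deadlines let the allocator drop to a cheaper operating point.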
Intrusion Detection Systems (IDS) play a vital role in safeguarding networks, yet conventional approaches that rely on a single data source often fall short against advanced threats, particularly in dynamic, containerized environments. To overcome this limitation, we propose Multidimensional Intrusion Detection System (MIDS), which fuses network flows with container-level features to provide a unified, holistic view of cluster activity. To enable evaluation, we generated two novel datasets by simulating common attacks—including DoS, brute force, and SQL injection—on Kubernetes-deployed applications (DVWA and Bank of Anthos). Experimental results using SVM, XGBoost, and DNN demonstrate that MIDS improves F1 scores by up to 8.69% over network-only data and 30.07% over container-only data. Feature analysis further confirms the complementary value of integrating multiple data dimensions, underscoring the effectiveness of MIDS for intrusion detection in containerized systems.
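As an illustration of the fusion idea (field names and values are invented, not the actual MIDS dataset schema), the sketch below joins per-flow network features with container-level metrics of the serving pod into a single feature vector.

```python
# Hypothetical fused sample construction: join per-flow network features with
# container-level metrics of the pod that served the flow.
network_flows = [
    {"flow_id": 1, "pod": "dvwa-7f9", "pkts_per_s": 820.0, "bytes_per_s": 9.1e5, "label": "dos"},
    {"flow_id": 2, "pod": "dvwa-7f9", "pkts_per_s": 12.0,  "bytes_per_s": 3.4e3, "label": "benign"},
]
container_metrics = {
    "dvwa-7f9": {"cpu_usage": 0.93, "mem_mb": 412.0, "restarts": 0},
}

def fuse(flow, metrics):
    """One multidimensional feature vector: network view + container view."""
    m = metrics[flow["pod"]]
    features = [flow["pkts_per_s"], flow["bytes_per_s"],
                m["cpu_usage"], m["mem_mb"], m["restarts"]]
    return features, flow["label"]

dataset = [fuse(f, container_metrics) for f in network_flows]
for x, y in dataset:
    print(y, x)
# The fused vectors would then be fed to SVM / XGBoost / DNN classifiers.
```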
Network Function Virtualization (NFV), combined with Software-Defined Networking (SDN) and Cloud computing, enables flexible deployment of software-based Virtualized Network Functions (VNFs) without the need for dedicated hardware. With the rise of the Internet of Things (IoT), strict latency and bandwidth demands have driven the adoption of Edge computing to complement the Cloud. A key challenge in such hybrid infrastructures is efficiently allocating scarce Edge and abundant Cloud resources while reducing bandwidth costs. To address this, we propose a 0–1 Integer Linear Program (0–1 ILP) for VNF placement and chaining with location constraints and multiple conflicting objectives. Given the computational intractability of the ILP, we introduce polynomial-time meta-heuristics—including Genetic Algorithm, Chemical Reaction Optimization, Tabu Search, and Simulated Annealing. Experimental results demonstrate their effectiveness in optimizing resource utilization, service acceptance rate, and execution time.
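The paper's full multi-objective formulation is not reproduced here; the following is a minimal illustrative 0-1 ILP sketch of VNF placement and chaining, with assumed placement costs a_{v,s}, path costs c_p, bandwidth demands b_l, resource demands r_v, and server capacities C_s.

```latex
% Minimal illustrative 0-1 ILP sketch (not the paper's full multi-objective model).
% x_{v,s} = 1 iff VNF v is placed on server s;
% y_{l,p} = 1 iff virtual link l is routed over physical path p.
\begin{align}
\min\quad & \sum_{v \in V}\sum_{s \in S} a_{v,s}\, x_{v,s}
            + \sum_{l \in L}\sum_{p \in P} b_l\, c_p\, y_{l,p}
            && \text{placement + bandwidth cost} \\
\text{s.t.}\quad
          & \sum_{s \in S} x_{v,s} = 1 \quad \forall v \in V
            && \text{each VNF placed once} \\
          & \sum_{v \in V} r_v\, x_{v,s} \le C_s \quad \forall s \in S
            && \text{server capacity} \\
          & y_{l,p} \le x_{u,\operatorname{src}(p)},\;\;
            y_{l,p} \le x_{w,\operatorname{dst}(p)}
            \quad \forall\, l=(u,w),\; p \in P
            && \text{chaining consistency} \\
          & \sum_{p \in P} y_{l,p} = 1 \quad \forall l \in L,
            \qquad x_{v,s},\, y_{l,p} \in \{0,1\}
\end{align}
```

Because solving such a model exactly becomes intractable as the network grows, the listed meta-heuristics are used to find near-optimal placements in polynomial time.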