TAG-LLM Powered by Conditional Task-Specific Attention Mechanisms


Rashi Khubnani, Ishika Ahuja, Smita Nair, Shalu Saxen

Abstract

The increasing use of large language models (LLMs) has raised concerns about unrestricted access to sensitive and potentially harmful information, particularly among minors. This study explores the implementation of a role-based control system for LLMs that addresses these concerns by linking all data within an LLM to specific access tags. These tags determine which information is available based on the user's role or profile: users who lack access to a given tag will not receive responses derived from the datasets carrying that tag. The same mechanism allows parents to lock specific resources for their children. Through an analysis of existing security mechanisms and content-control strategies, this paper evaluates how role-based tagging can enhance information security while preserving LLM functionality. The findings suggest that a well-designed tagging system can serve as a robust safeguard for sensitive information, supporting responsible LLM usage across different age groups and roles.
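As a rough illustration of the tagging idea described above, the sketch below shows how a corpus might be filtered by access tags before an LLM is permitted to draw on it. This is a minimal sketch under assumed names; the classes and function (UserProfile, Document, filter_corpus) are hypothetical and not taken from the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """A user and the access tags granted by their role (e.g. parent-managed)."""
    name: str
    role: str
    allowed_tags: set[str] = field(default_factory=set)

@dataclass
class Document:
    """A piece of source data, labelled with the tags that gate access to it."""
    text: str
    tags: set[str] = field(default_factory=set)

def filter_corpus(user: UserProfile, corpus: list[Document]) -> list[Document]:
    """Keep only documents whose every tag is granted to the user.

    Documents carrying a tag the user lacks are withheld, so responses
    derived from them are never generated for that user.
    """
    return [doc for doc in corpus if doc.tags <= user.allowed_tags]

if __name__ == "__main__":
    corpus = [
        Document("General science reference material.", {"general"}),
        Document("Mature-rated content.", {"general", "adult"}),
    ]
    child = UserProfile("child_account", role="minor", allowed_tags={"general"})
    parent = UserProfile("parent_account", role="adult", allowed_tags={"general", "adult"})

    print(len(filter_corpus(child, corpus)))   # 1 -- the adult-tagged document is locked out
    print(len(filter_corpus(parent, corpus)))  # 2 -- full access
```

In such a design, the filtering step would sit in front of retrieval or response generation, so that tag enforcement happens before any restricted data reaches the model's context.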
