[Purpose/Significance] The rapid proliferation of generative artificial intelligence (AI), exemplified by models such as DeepSeek-R1, has precipitated a paradigm shift across sectors, positioning AI literacy as an indispensable competency for the future workforce. University students, as digital natives and pivotal agents of technological adoption and innovation, stand at the forefront of this transformation. Their proficiency in understanding, utilizing, and critically evaluating AI technologies directly influences their academic performance, research capabilities, and long-term career adaptability. Although the existing literature has begun to explore the conceptual landscape of AI literacy, a significant gap remains: there is no robust, empirically validated competency framework tailored to the learning contexts, developmental needs, and future roles of university students within China's higher education system. This study addresses this gap by constructing and validating a comprehensive AI literacy competency framework for college students. Its primary significance lies in moving beyond theoretical discourse to provide an evidence-based model that can guide the systematic development of targeted training programs. The framework enriches the theoretical underpinnings of AI literacy education and offers practical guidance for cultivating high-quality talent equipped for the intelligent era. [Method/Process] This research employed a mixed-methods approach, integrating qualitative and quantitative methods to provide both theoretical grounding and empirical robustness. The study commenced with a qualitative phase based on grounded theory methodology: a systematic analysis of 112 core academic publications (2019-2024) drawn from databases such as CNKI and Web of Science.
Through a rigorous process of open coding, axial coding, and selective coding, facilitated by NVivo 11 software, we extracted 300 initial concepts, which were subsequently synthesized into 26 sub-categories and, ultimately, 4 main categories. This process yielded the preliminary four-dimensional AI literacy competency framework. A quantitative phase was then implemented to test and refine the framework. A questionnaire was developed based on the identified dimensions and indicators; using a five-point Likert scale, it measured 26 variables corresponding to the framework's sub-components. A total of 586 valid responses were collected from undergraduate students at universities in Jiangsu Province, China. The dataset was randomly split into two halves. The first subset (N=293) underwent exploratory factor analysis (EFA) in SPSS to uncover the underlying factor structure and to assess internal consistency reliability via Cronbach's alpha. The second subset (N=293) was subjected to confirmatory factor analysis (CFA) in AMOS to verify the hypothesized factor structure, evaluate model fit indices (e.g., CMIN/DF, CFI, TLI, RMSEA), and establish convergent and discriminant validity by examining average variance extracted (AVE) and composite reliability (CR). [Results/Conclusions] The empirical analyses strongly support the validity and reliability of the proposed competency framework. The EFA identified four distinct factors that aligned with the predefined dimensions, with a total explained variance of 69.916% and all factor loadings exceeding 0.6. The CFA results demonstrated excellent model fit (CMIN/DF=1.921, CFI=0.950, TLI=0.943, RMSEA=0.056), confirming the structural integrity of the framework. Furthermore, all constructs exhibited high internal consistency (Cronbach's α>0.90) and satisfactory convergent (AVE>0.5, CR>0.7) and discriminant validity.
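The reliability and validity statistics reported above follow standard psychometric formulas. As a minimal illustrative sketch (this is not the authors' actual SPSS/AMOS workflow, and the example loadings are hypothetical), Cronbach's alpha, AVE, and CR can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

def ave_cr(loadings: np.ndarray):
    """Average variance extracted (AVE) and composite reliability (CR)
    from one construct's standardized factor loadings."""
    sq = loadings ** 2
    ave = sq.mean()
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1.0 - sq).sum())
    return float(ave), float(cr)

# Hypothetical 4-indicator construct with uniform loadings of 0.8:
ave, cr = ave_cr(np.array([0.8, 0.8, 0.8, 0.8]))
# AVE = 0.64 (> 0.5) and CR ≈ 0.88 (> 0.7) would meet the thresholds above.
```

These closed-form computations mirror the convergent-validity criteria (AVE > 0.5, CR > 0.7) that dedicated packages such as SPSS and AMOS report.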
The finalized framework therefore comprises four interconnected core dimensions: AI Cognition (knowledge of basic concepts, applications, value, and risks), AI Skills (practical abilities ranging from tool usage and programming to critical evaluation and innovation), AI Ethics (social responsibility, privacy, intellectual property, and legal compliance), and AI Thinking (higher-order cognitive abilities such as computational, critical, and systemic thinking). Based on this validated framework, the study proposes a systematic, multi-faceted training system that outlines clear training objectives, identifies key stakeholders (e.g., university libraries, teaching centers, schools, and external enterprises), designs layered training content and pathways corresponding to each dimension, and suggests implementation strategies focused on faculty development, a comprehensive assessment and feedback mechanism, and the strategic integration of AI-related resources. The main limitation of this study is that the respondents in the empirical testing stage were primarily college students. Future research could incorporate teachers, employers, and AI experts, through the Delphi method, expert interviews, and other approaches, to refine the indicator weights and content of the competency framework from multiple perspectives and thereby enhance its authority and generalizability.