Most Read
  • SHEN Hongjie, SHEN Hongwei, WANG Junli
    Journal of library and information science in agriculture. 2025, 37(7): 50-60. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0231

    [Purpose/Significance] In the digital era, information literacy has evolved from an academic skill into a fundamental competency essential for civic participation and lifelong learning. Traditional information literacy education in digital libraries faces significant challenges, including the need for standardized content delivery, limited interactivity, high development costs, and insufficient user engagement. The rapid advancement of generative artificial intelligence (GenAI) technologies presents an unprecedented opportunity to transform information literacy education by leveraging powerful capabilities in natural language processing, personalized interaction, and content generation. This study represents a pioneering systematic exploration of how GenAI can be strategically integrated into digital library information literacy education. It addresses a critical gap in existing research, which primarily focuses on general educational applications rather than library-specific contexts. The research strengthens the theoretical basis of AI-enhanced library education and offers practical advice to institutions adopting innovative educational technologies while upholding quality and ethical standards. [Method/Process] This study employs a comprehensive mixed-method approach combining systematic literature review, theoretical analysis, and conceptual framework development. The methodology is grounded in well-established information literacy frameworks, particularly the ACRL Framework, which provides a foundation for breaking down information literacy education into five key components: information need identification, retrieval strategy development, resource evaluation, information management, and ethics education. A four-dimensional challenge analysis framework was constructed, encompassing content quality and credibility, pedagogical methods and learning outcomes, ethics and social equity, and operational considerations.
The research synthesizes evidence from emerging AI-enhanced education practices, preliminary library applications, and educational technology literature to develop comprehensive application pathways and strategic responses. [Results/Conclusions] The research identifies specific GenAI integration pathways across the complete information literacy process. Applications include intelligent dialogue guidance for need identification, simulated training environments for retrieval skills, controlled assessment materials for evaluation practice, and interactive ethical scenario simulations. Four primary challenge categories are revealed: content quality issues, including AI hallucination and embedded biases; pedagogical challenges, such as over-dependence risks and assessment complexity; ethical concerns encompassing data privacy and algorithmic discrimination; and operational challenges, including implementation costs and staff capability requirements. Strategic responses include human-AI collaborative review mechanisms, process-oriented task design emphasizing critical thinking, transparent ethical governance frameworks, and comprehensive staff development initiatives. The study emphasizes librarian role transformation toward learning facilitators, AI literacy educators, and ethics advocates. Despite these contributions, the study's limitations include its reliance on theoretical analysis rather than empirical validation and insufficient attention to user group heterogeneity. To ensure equitable and effective AI-enhanced information literacy education, future research should prioritize empirical outcome studies, case studies of pioneering implementations, and development of library-specific AI tools.

  • DAI Xinwei, LI Feng
    Journal of library and information science in agriculture. 2025, 37(5): 86-101. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0148

    [Purpose/Significance] Amid the global wave of digital transformation in education, artificial intelligence (AI) has emerged as a driving force behind Japanese educational reform, propelling the country's education system toward an "AI+" model. The "Approved Program for Mathematics, Data science and AI Smart Higher Education" (MDASH), led by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), outlines a comprehensive framework for designing and implementing AI literacy (AIL) education in Japanese universities. MDASH not only reflects Japan's strategic response to the AI-driven future, but also provides valuable theoretical references and practical guidance for enhancing AIL education in China. This study provides a detailed analysis of the "MDASH literacy-level" (MDASHL) curriculum model design, paying particular attention to the model's modules and the mechanisms of interaction between them. It also examines the theoretical references that the MDASHL review system offers to studies of AIL frameworks. The study proposes innovative implementation strategies for AIL education from unique perspectives, especially the "industry-academia integration" aspect. [Method/Process] Using internet research and literature analysis, the study begins with an exploration of Japan's national AI policy landscape, tracing the evolution of Japanese AI policies and the contextual origins of the MDASH. It describes the objectives and philosophy of Japanese AIL education and delves into the theoretical underpinnings of the MDASHL curriculum model based on the mapping relationship between indicators of AIL frameworks and the components of the MDASHL review system. We selected Hokuriku University, Wakayama University, Chiba University, and Kansai University as samples because they were approved under MDASHL and have demonstrated exemplary results.
We introduce their subject curriculum design and specific teaching initiatives, identify the commonalities and unique characteristics of their AIL education, and further elaborate on their specific educational implementation pathways. [Results/Conclusions] The findings indicate that the Japanese MDASHL curriculum model is deeply rooted in AIL frameworks. The model summarizes five educational directions for Japanese AIL education: recognition, realization, comprehension, ethics, and practical operation. Comparing the current status of AIL education in China and Japan, the study finds that Japanese AIL education has achieved rapid responsiveness and systematic development under the unified coordination of MEXT. It suggests that Japanese AIL education strategies hold value for localization, offering beneficial insights in three areas: strategic planning, curriculum design, and industry-academia integration. These strategies provide innovative solutions for developing AIL education systems in Chinese universities. However, this study acknowledges limitations in its sample size. To comprehensively capture the full landscape of Japanese AIL education development, future research should expand the sample, summarize its patterns and characteristics more thoroughly, and enhance the persuasiveness and generalizability of the findings.

  • LIU Wei, ZHANG Lei, JI Ting, CHEN Xiaoyang
    Journal of library and information science in agriculture. 2025, 37(5): 15-26. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0379

    [Purpose/Significance] In the era of cloud computing, the Library Services Platform (LSP) failed to become the unified solution for libraries that it promised to be. Now, it faces new development bottlenecks in the era of smart libraries. Its relatively rigid architecture, isolated data models, and limited intelligence make it difficult to meet modern users' urgent demands for access to new resource ecosystems and proactive services. This limitation stems from the fact that existing LSPs are rooted in a resource management design philosophy. They lack native support for intelligence, personalization, and ecosystem integration, which hinders their ability to serve as a core component in the construction of smart libraries. [Method/Process] The rapid development of large language model (LLM) technology is driving libraries to transition from the digital phase into a new era of intelligent services. As AI agents increasingly emerge as a core strategy for LLM applications, this paper proposes a next-generation, agent-oriented LSP architecture called A-LSP. The core of A-LSP consists of a three-layer logical model. 1) Layer 1: Compatibility & Tools - MCP Marketplace. Serving as the foundation of the platform, this layer bridges the agent ecosystem with the external world. It transforms existing heterogeneous library systems (including legacy LSPs) and external tools into invocable "capability units" for agents through standardized protocols. 2) Layer 2: Orchestration & Intelligence - Agent Middleware. Functioning as the platform's "operating system" and "brain," this layer handles agent lifecycle management, task planning and decomposition, state and memory maintenance, and, most crucially, the coordination of multi-agent collaboration. 3) Layer 3: Application & Ecosystem - Agent Marketplace.
This functional layer serves users and developers; here, various reusable agents encapsulating specific business logic are published, discovered, combined, and invoked, creating a rich application ecosystem. This architecture enables the implementation of new platform strategies without replacing legacy systems, establishing a modern technological platform with endogenous intelligence, inclusive compatibility, and an open ecosystem. The agent-based library service platform can be seen as a significant upgrade to existing LSPs: it drives their transformation from resource management-centric to agent service-centric, establishing itself as the library service platform for the AI era. [Results/Conclusions] This paper further puts forward a "Five Centers" construction demand framework for future libraries, namely the Smart Resource Center, Smart Service Center, Smart Learning Center, Smart Scholarly Communication Center, and Smart Cultural Heritage Center, to build a blueprint for the integration of library technology and business. For each center, it delineates a representative complex application scenario and analyzes the underlying multi-agent collaboration processes, thereby clearly demonstrating A-LSP's deep integration with each center's operational logic and illuminating its profound impact on future library service models.
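The three-layer division of labor described above can be illustrated with a minimal Python sketch. All class and method names here are hypothetical illustrations, not from the paper: Layer 1 wraps heterogeneous systems as invocable tools, Layer 2 orchestrates tasks over those tools, and Layer 3 publishes reusable agents for users.

```python
# Hypothetical sketch of the A-LSP three-layer logical model (names invented).
from typing import Callable, Dict, List, Tuple

class ToolMarketplace:            # Layer 1: Compatibility & Tools - MCP Marketplace
    def __init__(self):
        self._tools: Dict[str, Callable[[str], str]] = {}
    def register(self, name: str, fn: Callable[[str], str]):
        # e.g. a legacy LSP search API exposed as a "capability unit"
        self._tools[name] = fn
    def invoke(self, name: str, query: str) -> str:
        return self._tools[name](query)

class AgentMiddleware:            # Layer 2: Orchestration & Intelligence
    def __init__(self, tools: ToolMarketplace):
        self.tools = tools
        self.memory: List[str] = []          # state/memory maintenance
    def run_task(self, plan: List[Tuple[str, str]]) -> List[str]:
        # task decomposition is modeled here as a fixed (tool, query) plan
        results = [self.tools.invoke(tool, q) for tool, q in plan]
        self.memory.extend(results)
        return results

class AgentMarketplace:           # Layer 3: Application & Ecosystem
    def __init__(self, middleware: AgentMiddleware):
        self.middleware = middleware
        self.agents: Dict[str, List[Tuple[str, str]]] = {}
    def publish(self, name: str, plan: List[Tuple[str, str]]):
        self.agents[name] = plan
    def call(self, name: str) -> List[str]:
        return self.middleware.run_task(self.agents[name])

# Wire the layers together with a stand-in "legacy catalog" tool.
tools = ToolMarketplace()
tools.register("catalog_search", lambda q: f"3 records for '{q}'")
mw = AgentMiddleware(tools)
market = AgentMarketplace(mw)
market.publish("reading_advisor", [("catalog_search", "smart libraries")])
print(market.call("reading_advisor"))  # ["3 records for 'smart libraries'"]
```

The point of the sketch is the dependency direction: agents (Layer 3) never talk to legacy systems directly, only through the middleware and the tool registry, which is what lets new strategies run without replacing existing systems.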

  • ZHANG Weichong, XU Chen, ZHU Yiran
    Journal of library and information science in agriculture. 2025, 37(5): 72-85. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0325

    [Purpose/Significance] As digital government accelerates, the artificial intelligence (AI) literacy of grassroots civil servants has become critical to promoting smart government management. Grassroots-level civil servants who possess high levels of digital and AI literacy are indispensable drivers in establishing a digital and smart government. However, significant differences among grassroots civil servants in AI literacy and digital skills adaptation make it difficult for them to fully adapt to the requirements of smart government management. To effectively apply AI technologies in grassroots governance, it is essential to systematically identify the factors influencing AI literacy and propose targeted cultivation paths, thereby improving public service quality and governance efficiency. [Method/Process] This study integrates the Technology Acceptance Model (TAM) and Innovation Diffusion Theory (IDT) to construct a TAM-IDT analytical framework. Based on empirical research identifying the AI literacy deficiencies of current grassroots civil servants, the TAM-IDT analytical framework systematically examines the mechanisms through which key variables (perceived usefulness, perceived ease of use, and behavioral attitude) affect AI literacy. The framework also proposes stage-based and group-specific cultivation strategies. The study uses local government civil servants as its research sample. It collects data through questionnaires and interviews, and employs structural equation modeling and mediation effect analysis for empirical validation. [Results/Conclusions] The findings reveal that behavioral attitude has a significant positive impact on AI literacy. Perceived usefulness notably enhances behavioral intention, while perceived ease of use has a negative effect on behavioral attitude, suggesting that individuals who perceive greater difficulty may be more motivated to learn.
However, one of the highlights of this study is the finding that civil servants who are proficient in AI technology, or who have already used it in their work, show a lower desire to learn more about it. Further analysis shows that perceived ease of use positively influences behavioral attitude indirectly through perceived usefulness. Additionally, both cognitive variables indirectly affect AI literacy via behavioral attitude, forming a "cognition-intention-behavior" influence chain. Based on these results and on IDT's classification of the stages and types of technology adoption, a three-dimensional, differentiated AI literacy cultivation strategy called "perception-diffusion-collaboration" is proposed. This strategy is built around the five elements of innovation diffusion, its stages, and its adopter groups. It offers a theoretical foundation and practical path for improving AI literacy among grassroots civil servants and advancing the modernization of grassroots governance.

  • ZHAO Yajing
    Journal of library and information science in agriculture. 2025, 37(6): 70-86. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0290

    [Purpose/Significance] This study focuses on the participation behavior of depression-prone users on user-generated content (UGC) platforms, aiming to explore their behavioral heterogeneity and the underlying influencing mechanisms. It also seeks to expand the theoretical scope of studies on user behavior while providing UGC platforms with practical guidance on building differentiated user care models and refining operational strategies. By utilizing authentic user-generated content as the data foundation, this study addresses the representational limitations commonly associated with traditional small-sample approaches, such as surveys and interviews. It introduces a data-driven perspective and methodological innovation to the field of information behavior research. Furthermore, this study enhances the understanding of the varying psychological and behavioral needs among different types of depression-prone users. The findings can assist platforms in optimizing user experience, improving emotional support systems within online communities, and informing the development of more targeted and responsive intervention strategies. [Method/Process] First, web scraping techniques were used to collect a large volume of depression-related posts from the Xiaohongshu platform as the primary data source. Second, representative keywords were extracted through Word2Vec and K-means clustering algorithms. A keyword co-occurrence network was then constructed using the Leiden clustering algorithm to identify semantic relationships. By integrating user attribute information, the study achieved a fine-grained classification of heterogeneous depression-prone user groups. Third, drawing on self-determination theory (SDT) and the technology acceptance model (TAM), and leveraging BERTopic for advanced topic modeling, the study constructed a comprehensive factor model to examine in depth the mechanisms influencing user participation behavior.
[Results/Conclusions] The research identifies three distinct types of depression-prone users: adolescent depression expression, help-seeking expression, and emotional breakdown expression. Results indicate that posting and commenting behaviors across these groups are primarily driven by emotional needs and environmental factors. Emotional needs are the dominant motivator for active participation, while environmental influences significantly contribute to triggering interaction, especially within comment sections. Additionally, adolescent depression expression and emotional breakdown expression show stronger tendencies toward self-related needs, reflecting deeper emotional and identity concerns. In contrast, help-seeking expression exhibits more evident competence-related needs, focusing on practical advice and problem-solving. Although competence and technical factors account for a smaller proportion, they still play a meaningful supporting role in shaping the structure and substance of user participation behavior on UGC platforms.
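Of the method steps above, the K-means clustering used to group keyword vectors can be sketched in a few lines of NumPy. This is an illustration of the algorithm only, not the study's code; toy 2-D points stand in for the Word2Vec embeddings.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance of every point to every centroid
        dists = ((X[:, None] - centroids[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy "keyword embeddings": two well-separated clouds.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels, _ = kmeans(X, k=2)
print(labels)  # the first three points share one label, the last three the other
```

In the study's pipeline the same grouping step would run on high-dimensional Word2Vec vectors rather than 2-D points, with k chosen to match the desired number of keyword clusters.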

  • SHI Xujie, YUAN Fan, LI Jia
    Journal of library and information science in agriculture. 2025, 37(5): 40-57. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0274

    [Purpose/Significance] This paper investigates how generative artificial intelligence (GenAI) is reshaping the Searching as Learning (SAL) paradigm, focusing on its implications, challenges, and prospects in Library and Information Science (LIS). Traditional SAL emphasizes the cognitive and metacognitive processes by which users acquire and construct knowledge through information retrieval. However, the advent of GenAI - especially large language models (LLMs) - introduces a transformative shift from keyword-based querying to dynamic, dialogic, and multimodal interactions. This study aims to clarify the conceptual and practical significance of GenAI-driven SAL, explore its technical trajectories, and evaluate its impact on learners' behavior, learning strategies, and information literacy. It also highlights the emerging ethical and epistemological challenges posed by GenAI systems in learning-oriented search contexts. [Method/Process] Using the PRISMA-ScR framework, this study conducted a scoping review of academic and gray literature published between January 2023 and May 2025. A total of 1 681 records were retrieved from major academic databases and preprint repositories. After screening titles, abstracts, and full texts, 22 studies were selected for in-depth qualitative analysis. Thematic coding and synthesis were conducted to extract recurring patterns and theoretical insights across three key dimensions: GenAI-enhanced search technologies, evolving user behaviors in SAL contexts, and normative concerns associated with credibility, agency, and transparency. The analysis was grounded in LIS theories, including information behavior, metacognitive models of learning, and digital/information literacy frameworks. [Results/Conclusions] The results reveal that GenAI is fundamentally reshaping SAL in three key areas. 
First, in terms of technology, GenAI systems (e.g., GPT-based chat interfaces) provide conversational, context-aware, and multimodal assistance, transforming SAL from reactive searching to proactive co-learning. These systems scaffold learning through adaptive query reformulation, real-time content summarization, and source triangulation, supporting iterative reflection and cognitive engagement. Such affordances mirror the functions traditionally associated with human tutors, thereby expanding learners' capacity for critical inquiry and self-directed exploration. Second, user behaviors in SAL are undergoing a paradigm shift. Learners increasingly engage in human-AI co-construction of knowledge, participating in iterative query-dialogue loops that facilitate concept clarification and knowledge synthesis. While this enhances engagement, personalization, and perceived learning efficiency, it also raises concerns. Over-reliance on AI-generated content may undermine learners' critical thinking, reduce information discernment, and promote passive consumption. The study identifies a dual effect: while GenAI augments higher-order thinking and strategic learning, it can also lead to superficial comprehension when learners lack the skills to critically evaluate AI output. Third, the review underscores the urgency of addressing ethical and pedagogical challenges. Issues such as AI hallucination, algorithmic opacity, and biased content threaten the credibility of GenAI-enhanced learning environments. From an LIS perspective, this necessitates a reconfiguration of information literacy education to include AI literacy. Students must be equipped not only to retrieve and evaluate information, but also to interrogate algorithmic sources, verify provenance, and triangulate AI outputs with authoritative references. GenAI should be positioned as a cognitive assistant, not a definitive knowledge authority.
GenAI holds substantial promise in enhancing SAL through greater interactivity, personalization, and cognitive scaffolding. However, these benefits must be balanced with informed practices that mitigate risks to learner autonomy, critical reasoning, and information ethics. This work establishes an analytical foundation for future research and practices at the intersection of AI, learning, and information behavior.

  • LI Xinxin, MA Yumeng, JU Zihan, WANG Jing
    Journal of library and information science in agriculture. 2025, 37(10): 53-66. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0396

    [Purpose/Significance] In recent years, the rapid rise of large language model technology has shown significant advantages in understanding semantic context and capturing multidimensional sentiment tendencies. This study explores an aspect-level sentiment analysis method for science and technology policy comments based on large language models, aiming to uncover latent knowledge within these texts and provide data support for evaluating the effectiveness and subsequent optimization of policies. [Method/Process] Taking the electric vehicle industry as an example, a burgeoning sector vital to achieving the "dual carbon" goals and promoting green low-carbon development, this study proposed a policy satisfaction evaluation model. The model uses large language models for fine-grained aspect-level sentiment analysis of policy comment texts. The process includes the following steps: 1) Data collection and preprocessing: Comments related to electric vehicle policies were collected from the "Interactive Topics" section of the "Autohome" website using Python. Deep learning techniques were applied to set rules for the comment texts and automatically add punctuation marks to Chinese texts for data pre-processing. 2) Aspect word extraction: The steps include text tokenization, determining a candidate aspect word set, expanding the aspect word set, and clustering aspect words. A total of 3 405 aspect words were extracted from 35 000 comments, forming six clusters: infrastructure construction, vehicle performance configuration, national policies, technological development, automotive safety, and automotive sales market. Aspect-level sentences were extracted using aspect words and punctuation information, with a subset of sentences manually labeled to build training and validation corpora, resulting in 14 911 aspect-level sentences. 
3) Sentiment tendency recognition model training: A prompt template for aspect-level sentiment classification tasks was designed, and the LoRA method was used to fine-tune the large language model with the manually labeled training set. The model's performance was evaluated on a validation set, resulting in the classification of comments on electric vehicle policies into positive, neutral, and negative sentiments. 4) Comparative experiment: The fine-tuned large model was compared with the mainstream sentiment classification model BERT to assess the performance of different models in aspect-level sentiment classification tasks. [Results/Conclusions] The results show that, compared to the BERT model, the proposed method achieved improvements of 11.49%, 12.43%, and 11.43% in accuracy, recall, and F1 score, respectively. Overall, public attention is higher towards vehicle performance configuration and the automotive sales market, while infrastructure construction receives the lowest attention. Overall public satisfaction with electric vehicles is relatively low, with negative comments outweighing positive comments across all aspects, consistent with the "negative bias" theory in social psychology. Satisfaction issues are particularly prominent in the areas of automotive safety and infrastructure construction. Finally, policy recommendations are proposed to optimize electric vehicle subsidy policies, strengthen policy promotion, improve infrastructure construction, and enhance after-sales service support systems.
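The LoRA idea used in step 3 - freezing the pretrained weights and training only a low-rank update - can be shown numerically. This is a matrix-level sketch with toy dimensions, not the paper's fine-tuning code; in the actual study the update is applied inside a large language model's layers.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 4, 6, 2, 8   # toy sizes; in LoRA, r << min(d_out, d_in)

W0 = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # B starts at zero, so the adapter is a no-op at first

def lora_forward(x):
    # h = W0 x + (alpha / r) * B A x : base output plus a scaled low-rank correction
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Before any training, the adapted model reproduces the base model exactly.
assert np.allclose(lora_forward(x), W0 @ x)

# Trainable parameters drop from d_out*d_in to r*(d_in + d_out).
print(d_out * d_in, r * (d_in + d_out))  # 24 20
```

With toy dimensions the saving is small, but for a real weight matrix (say 4096x4096 with r=8) the trainable parameter count falls by roughly three orders of magnitude, which is what makes LoRA fine-tuning of large models practical.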

  • ZHANG Tao, WU Sihang
    Journal of library and information science in agriculture. 2025, 37(7): 91-105. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0399

    [Purpose/Significance] This study addresses the "motivation black box" problem. By integrating achievement goal theory and technology acceptance models, it aims to construct a four-dimensional "motivation-identity-cognition-engagement" theoretical framework to analyze the driving mechanisms underlying AI teaching assistant usage behavior. [Method/Process] A questionnaire survey was utilized in this study. The Chaoxing Learning platform served as the research context, and college students who use AI teaching assistants constituted the research subjects. The chain mediating effect between technology identity recognition and technology acceptance was tested using structural equation modeling (SEM). The significance of the pathways was verified via the Bootstrap sampling method. Data analysis was performed using SPSS 26.0 and SmartPLS 3.3.9 software. [Results/Conclusions] Key findings reveal that within the learning environment integrating Chaoxing's online courses with AI teaching assistants, achievement goal orientations demonstrated significant divergence, with mastery-approach goals (MAP) emerging as the sole significant driver - other goal orientations showed no statistically reliable predictive effects. Crucially, MAP significantly promoted dependent (β=0.308), critical (β=0.262), and exploratory (β=0.244) usage behaviors through the "technology identity recognition → technology acceptance" chain-mediation pathway. Furthermore, technology identity recognition exhibited dual mediation dominance in behavior formation, as this chain-mediation pathway accounted for more than 50% of total effects across all three usage behaviors, particularly for dependent and exploratory usage. Notably, technology identity recognition demonstrated the strongest mediation effect on dependent behaviors (β=0.418). Further analysis indicates that MAP's total effect on technology identity recognition substantially exceeded its direct effect on technology acceptance.
This critical finding aligns with Deci and Ryan's self-determination theory, confirming that intrinsic motivation (exemplified by MAP) facilitates deeper skill internalization. Specifically, students focused on competence development showed greater tendency to integrate AI skills into their self-concept (e.g., perceiving themselves as "technology-proficient learners") rather than viewing them merely as external tools - a mechanism that empirically explains why traditional technical training that emphasizes operational skills often fails to foster sustained usage. Most significantly, this research provides important implications for educators in guiding students' use of AI teaching assistants: they should prioritize cultivating students' mastery-approach goals (MAP) through instructional design that strengthens students' pursuit of knowledge. Such an approach enhances the effectiveness of AI tools in teaching while simultaneously offering direction for the Chaoxing Learning Platform to optimize its AI teaching assistant features. Specifically, the platform should enhance personalized learning support tailored to the needs of MAP-oriented users, thereby better aligning with students' intrinsic learning motivations.
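The Bootstrap test of a chain-mediation pathway can be illustrated with simulated data. This is a deliberately simplified sketch: each path is estimated by a bivariate regression and all variable names and path values are invented, whereas the study fits a full SEM in SmartPLS with direct paths controlled.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# Simulated chain: MAP -> identity -> acceptance -> usage (coefficients invented).
map_goal = rng.normal(size=n)
identity = 0.5 * map_goal + rng.normal(scale=0.8, size=n)
accept   = 0.6 * identity + rng.normal(scale=0.8, size=n)
usage    = 0.4 * accept   + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on x (single centered predictor)."""
    x = x - x.mean(); y = y - y.mean()
    return float(x @ y / (x @ x))

def indirect(idx):
    # Product of the three path coefficients a*d*b along the chain.
    return (slope(map_goal[idx], identity[idx])
            * slope(identity[idx], accept[idx])
            * slope(accept[idx], usage[idx]))

# Bootstrap: resample cases with replacement, re-estimate the indirect effect.
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))  # a CI excluding zero indicates significant chain mediation
```

The percentile confidence interval is the standard decision rule here: the indirect effect is judged significant when the bootstrap interval does not contain zero, which is the same logic the study applies to its SEM pathways.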

  • XUE Qian, ZHAO Hong, REN Fubing
    Journal of library and information science in agriculture. 2025, 37(10): 78-95. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0368

    [Purpose/Significance] Science and technology have emerged as pivotal domains of competition between China and the United States. This article provides a quantitative analysis of US technology think tank reports on electronic information research and industry, with a focus on the evolution of themes and topics over the past decade. This analysis not only reflects these think tanks' technological priorities but also maps their analytical focus on China, providing decision-making support for the development of China's think tanks and for strategic response. [Method/Process] Based on the "2020 Global Go To Think Tank Index Report" released by the Think Tanks and Civil Societies Program (TTCSP) at the University of Pennsylvania, and considering factors such as think tank authority, research topic relevance, and research continuity, we collected a total of 1 360 reports on electronic information research and industry published between 2015 and 2024 by 8 leading US technology think tanks. Topic analysis was conducted with BERTopic, a topic modeling tool based on Transformer embeddings. The methodology involved several key steps. First, text cleaning was performed using NLTK tools; then, the all-MiniLM-L6-v2 model was employed to generate high-dimensional document embedding vectors. Subsequently, dimensionality reduction was achieved through the UMAP algorithm, followed by density clustering using the HDBSCAN algorithm. Finally, topic words were extracted based on the c-TF-IDF algorithm. [Results/Conclusions] The research identified 31 distinct research themes, of which 6 were directly related to China, specifically: global semiconductor industry competition, Sino-US digital policies and cloud computing competition, 5G network and technology competition, Chinese AI investment, Sino-US science and innovation policies, and Sino-US military technology competition.
These 31 research themes were hierarchically clustered using HDBSCAN and grouped into 11 major research directions, on which the US technology think tanks maintained a persistent focus. These directions were largely concentrated in key areas of electronic information research and industry, such as semiconductors and microelectronics, artificial intelligence, wireless communication, quantum information technology, network security, and big data. The evolutionary trends across these research directions were generally consistent, with military technology and network security receiving the highest levels of attention. Attention to China has undergone a significant strategic shift over the years, with sharp increases in attention to semiconductor export controls, AI technology, and Sino-US digital competition. Based on the identified key themes and topic words, it is highly recommended to establish an evolutionary mapping of China-related topics and to develop a dynamic monitoring and early warning mechanism for technology issues concerning China. Future research could incorporate larger-scale corpus resources and more advanced large language models to continuously optimize topic modeling effectiveness.
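The c-TF-IDF step that extracts topic words can be reproduced on a toy term-count matrix, following BERTopic's class-based TF-IDF idea: term frequency within each topic's concatenated documents, weighted by a class-level IDF. The vocabulary and counts below are invented for illustration.

```python
import numpy as np

# Toy term counts per topic (rows: topics, cols: vocabulary), standing in for
# the counts aggregated per cluster after HDBSCAN.
vocab = ["semiconductor", "export", "ai", "quantum", "5g"]
counts = np.array([[12, 8, 1, 0, 0],    # topic 0: chip/export controls
                   [1, 0, 10, 2, 0],    # topic 1: AI competition
                   [0, 0, 1, 1, 14]])   # topic 2: 5G

tf = counts / counts.sum(axis=1, keepdims=True)   # term frequency within each class
A = counts.sum() / counts.shape[0]                # average word count per class
idf = np.log(1 + A / counts.sum(axis=0))          # class-based inverse document frequency
ctfidf = tf * idf

top = [vocab[i] for i in ctfidf.argmax(axis=1)]   # highest-scoring word per topic
print(top)  # ['semiconductor', 'ai', '5g']
```

The class-based IDF downweights words that are frequent across all topics, so each topic's top-scoring terms are the ones that distinguish it, which is how BERTopic produces interpretable topic labels from clustered documents.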

  • SONG Yaping, LIAN Kangping
    Journal of library and information science in agriculture. 2025, 37(8): 92-103. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0402

    [Purpose/Significance] Study tours, as a dynamic integration of education and tourism, represent a significant opportunity for public libraries to innovate their service models and enhance their cultural and educational roles within the framework of cultural tourism integration. This study explores pathways for public libraries to develop high-quality study tours, addressing the growing demand for diverse, high-quality social education in the context of China's "cultural confidence" and "national reading" initiatives. Unlike prior studies that focus narrowly on specific library practices or regional cases, this research provides a comprehensive analysis that integrates domestic and international perspectives and emphasizes the strategic role of public libraries in cultural resource transformation. This paper contributes to the academic discourse by proposing actionable frameworks for service innovation. These frameworks position public libraries as pivotal players in advancing cultural education and addressing contemporary societal needs through interdisciplinary collaboration and resource optimization. [Method/Process] Adopting a cultural tourism integration perspective, this study employs a multi-method approach that combines a literature review, a comparative case analysis, and an empirical survey to examine the current state of, and challenges to, public library study tour services. The research establishes a theoretical foundation by drawing on policy documents, industry reports, and academic literature, and identifies best practices and gaps by analyzing representative domestic and international library cases. Domestic cases span provincial, municipal, and county-level libraries, covering diverse regions and service models, such as digital innovation and regional cultural integration.
International cases include libraries in North America, Europe, and Asia, highlighting technological applications and cross-sector collaboration. The comparative analysis focuses on cooperation models, course design, and service mechanisms, supported by empirical data from user feedback and activity records. This approach ensures a robust understanding of practical challenges and opportunities, grounded in both national policy contexts and global experiences. [Results/Conclusions] The study identified key challenges in public library study tour services, including insufficient resource integration, severe homogenization of service formats, lack of evaluation systems, and weak cross-sector collaboration mechanisms. To address these issues, five strategic pathways have been proposed: establishing multi-party collaboration frameworks involving government, schools, and social organizations; creating expert talent pools to enhance service professionalism; developing standardized service guidelines to improve consistency and quality; deepening thematic content by leveraging library collections; and implementing comprehensive feedback and evaluation systems to ensure continuous improvement. These strategies enable public libraries to create distinctive, culturally rich study tour programs that align with regional identities and educational goals. However, there are limitations, such as the scalability of resource-intensive models and the need for ongoing funding. Future research could explore the integration of digital technology, such as AI-driven evaluation systems, and cross-regional collaboration to enhance scalability and inclusivity of public libraries, thereby advancing their role in cultural tourism integration and social education.

  • CHEN Nan
    Journal of library and information science in agriculture. 2025, 37(12): 64-80. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0427

    [Purpose/Significance] With the rapid development of technologies such as artificial intelligence, big data, and cloud computing, digital-intelligent technologies are profoundly revolutionizing the service models and management frameworks of public libraries. This study is based on the development background of the digital-intelligent era under the 15th Five-Year Plan. It investigates the smart library services of the National Library, the Hong Kong Central Library, the Macao Central Library, libraries in the Taiwan region, and 31 provincial-level public libraries across China. The analysis focuses on the current research progress in smart library services provided by public libraries, examining both service content and methods. [Method/Process] This research employed a comparative analysis method, comparing the smart library services of 31 provincial-level public libraries in China with those in Hong Kong, Macao, and the Taiwan region to identify regional differences and development gaps. The investigation reveals that the development of smart library services in public libraries in China exhibits significant regional imbalances. Public libraries in economically developed regions demonstrate a significantly higher level of smart library services compared to those in less developed areas. [Results/Conclusions] Based on the findings, this study proposes development strategies for smart library services in public libraries within the digital-intelligent environment. These strategies include building an intelligent technology management system, establishing tiered smart service standards, cultivating a multidisciplinary team of smart librarians, creating an inclusive smart service system, developing an integrated smart resource platform, designing blended physical-virtual smart service spaces, and fostering collaborative innovation in smart service alliances. 
The challenges faced and the experiences gained by public libraries during the "14th Five-Year Plan" period provide critical insights for the formulation of the "15th Five-Year Plan," while also representing core issues that must be acknowledged and addressed in the journey of the "15th Five-Year Plan." This necessitates the development of scientific and effective strategies by public libraries, which is also a key task of the "15th Five-Year Plan." As a pivotal phase for the innovative development of public libraries, the "15th Five-Year Plan" period should actively implement national policies, with each library formulating development strategies and specific measures for smart library services based on the needs of public cultural development and their own practical circumstances. Grounded in the context of the "15th Five-Year Plan" and building upon the current state of smart library services in provincial-level public libraries during the "14th Five-Year Plan" period, this paper proposes strategies for smart library services in public libraries during the "15th Five-Year Plan" period in the digital-intelligent era, with the aim of contributing to the promotion and development of smart library services in public libraries nationwide.

  • LI Guihua
    Journal of library and information science in agriculture. 2025, 37(8): 40-49. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0472

    [Purpose/Significance] The social application of technologies, such as generative artificial intelligence (AI), presents challenges for adolescents when it comes to deep reading. In response, China has promoted the Youth Reading Initiative and issued the "Notice on Further Implementing the National Youth Student Reading Action", which outlines five key projects and provides a clear roadmap for action. To thoroughly implement and effectively achieve the goals, it is essential to clarify the specific path choices and their inherent rationality. [Method/Process] This paper reviews China's decade-long initiatives to develop a youth reading ecosystem. It demonstrates that the nation has established a robust foundation to support "holistic reading" initiatives and prioritizes creating such environments as its strategic focus. The "Notice on Further Implementing the National Youth Student Reading Initiative" first mentioned both the concepts of "holistic reading" and "deep reading" simultaneously. Thus, in this paper, we first clarified four characteristics of holistic reading, and then analyzed the relationship between "holistic reading" and "deep reading" based on discussions about the real-world impact of the AI era on youth reading. We finally elucidated the logical and practical foundations for the formation of current youth reading promotion pathways in China. [Results/Conclusions] China's recent policies for youth reading initiatives demonstrate the nation's commitment to a "holistic reading" approach that encourages "deep reading" among adolescents. Emerging from the interplay between contemporary educational philosophies and evolving educational environments, this strategic choice signifies a return to the fundamental essence of reading. Cultivating reading can comprehensively enhance teenagers' independent thinking, social responsibility, innovative spirit, and practical abilities. 
However, the development of deep reading skills among today's youth faces unprecedented challenges due to significant changes in media environments and knowledge acquisition methods. Therefore, in this era where technological environments profoundly reshape learning conditions, only by embracing the concept of "holistic reading" can teenagers develop the internal motivation needed to counteract the effects of the current information environment and cultivate their perseverance in deep reading. The progression from "holistic reading" to "deep reading" represents a significant shift from reading habits to reading competence. First, broadening one's reading scope lays the foundation for deep reading. Second, access to quality reading materials ensures effective outcomes of deep reading. Third, peer motivation cultivates the drive for deep reading. Fourth, promoting specialized reading creates societal momentum that propels deeper engagement. Finally, the paper posits that achieving this transformation necessitates coordinated efforts spanning various dimensions, including stakeholder engagement, goal-setting, and resource allocation.

  • XIAO Qinghua
    Journal of library and information science in agriculture. 2025, 37(8): 50-60. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0165

    [Purpose/Significance] This study aims to explore the reading difficulties experienced by children in intergenerational caregiving situations in rural China, analyze the causes of these difficulties, and propose targeted solutions. The research is motivated by the growing concerns about educational disparities and developmental challenges experienced by this vulnerable group, especially within the context of China's rural revitalization strategy. Unlike previous studies, which have primarily focused on the broader category of rural left-behind children, this paper focuses on a specific subgroup - rural children raised by their grandparents - to offer a more nuanced understanding of the unique obstacles these children face in relation to reading. This study contributes to both academic discourse on rural education and efforts aimed at promoting equitable development by identifying the structural and cultural factors that contribute to low reading literacy among these children. By integrating theories of family sociology, educational inequality, and digital divide, it fills a critical gap in existing literature and offers new insights into how intergenerational caregiving intersects with literacy development. [Method/Process] The research was conducted in a rural county located in Guangdong Province. A mixed-methods approach was adopted that combined in-depth interviews with caregivers and teachers, a textual analysis of local education policies, and online surveys of rural schools and community centers. A grounded theory approach was employed as the analytical framework, and a three-stage coding process was used to develop a measurement model for assessing individual reading barriers. This methodological rigor ensured that the findings were grounded in empirical data, yet still allowed for theoretical generalization. 
[Results/Conclusions] The findings reveal that rural children under intergenerational care face multiple reading challenges, including limited access to books, inadequate reading environments, and a lack of awareness about the importance of reading. These issues stem from complex sociostructural factors, including fragmented family structures, limited educational opportunities for grandparents, and imbalanced use of digital technologies. To address these challenges, the study proposes a multi-pronged intervention framework. This framework includes strengthening policy support for rural reading programs, mobilizing volunteers as reading mentors, guiding the appropriate use of digital tools to enhance literacy, and encouraging intergenerational reading activities within families. While this study provides valuable insights, further longitudinal and comparative research across diverse rural regions is needed to validate and expand upon these findings. Future studies could also examine the long-term impact of reading interventions on children's academic achievement and psychosocial development.

  • YE Zhifei, WU Zhenxin, LI Hanyu, WANG Ying
    Journal of library and information science in agriculture. 2025, 37(10): 67-77. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0364

    [Purpose/Significance] The rapid advancement of digital infrastructure has precipitated a fundamental transformation in scholarly communication, characterized by an increasing reliance on online platforms. Preprint exchange, as a cornerstone of open science, offers researchers opportunities for immediate dissemination and collaborative engagement. However, the absence of rigorous peer review raises persistent concerns regarding research ethics, data integrity, and the reliability of scholarly outputs, which can undermine public confidence in preprint platforms. Addressing these challenges is essential not only for maintaining the integrity of academic discourse but also for fostering a transparent and trustworthy open science ecosystem. This research contributes to the existing scholarship by systematically examining the trust framework of preprint platforms, positioning itself at the intersection of library and information science and scholarly communication studies. In contrast to previous investigations that have focused predominantly on dissemination efficiency or platform functionality, this study emphasizes the structural dimensions of trustworthiness. It presents an innovative analytical framework that strengthens the theoretical foundations of academic information trust and provides practical strategies for enhancing the governance and legitimacy of preprint platforms. [Method/Process] To ensure both theoretical rigor and empirical depth, first, a comprehensive literature review was conducted to identify potential trust-related vulnerabilities in preprint platforms and to systematically delineate their credibility challenges. This review identified five critical factors influencing the credibility of preprint platforms: academic conflicts of interest, platform reliability, heterogeneous manuscript quality, information overload, and insufficient academic recognition. 
    Drawing upon the DeLone & McLean (D&M) Information Systems Success Model and aligning with the ISO 16363 standard for trustworthy digital repositories, the study analyzed the structural components of trustworthiness through the dimensions of system quality, information quality, and service quality. Subsequently, in-depth case studies of prominent platforms, including arXiv and ChinaXiv, were undertaken to examine their governance architectures, operational methodologies, and practical implementations. This process culminated in evidence-based recommendations for enhancing platform trustworthiness. This integrated methodological framework not only synthesizes theoretical insights with empirical evidence but also ensures the scientific rigor, reliability, and practical applicability of the proposed trust model. [Results/Conclusions] Based on these findings, a three-dimensional trust framework was developed, encompassing system trustworthiness, information trustworthiness, and service trustworthiness. This framework transcends traditional quality control paradigms and offers novel perspectives for the standardized development of preprint platforms. The research further articulates pathways for establishing trustworthiness across three levels: 1) system trustworthiness, adhering to FAIR principles and implementing long-term preservation strategies to provide a stable institutional foundation; 2) information trustworthiness, establishing a comprehensive quality governance continuum that incorporates "pre-screening, dynamic identification, and post-peer review" mechanisms; and 3) service trustworthiness, delivering professional preprint services through collaborative governance models and journal coordination frameworks. While this framework provides a comprehensive analytical perspective, certain limitations should be acknowledged. This study's primary reliance on qualitative methods necessitates broader empirical validation. 
Furthermore, its focus was on platform functionalities rather than user perceptions. Consequently, future research can adopt a mixed-methods approach, incorporate user perception theories, and establish quantitative metrics for evaluating platform trustworthiness.

  • JIANG Yumeng
    Journal of library and information science in agriculture. 2025, 37(7): 19-34. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0357

    [Purpose/Significance] The rapid advancement of artificial intelligence (AI) has fundamentally transformed academic research and information services. This makes AI literacy education a critical part of the strategy for academic libraries. As AI technologies become integrated into various aspects of scholarly activities, including literature searches, data analysis, academic writing and publishing, libraries must expand their traditional information literacy programs to include comprehensive AI competencies. This study focuses on analyzing AI literacy education practices in Nordic academic libraries, which are recognized for their progressive approaches to digital education and technology integration. By examining these international exemplars, the research aims to provide valuable references for academic libraries in China. The findings will help libraries develop systematic approaches to equip faculty and students with both technical AI skills and critical understanding of AI's ethical implications, ultimately supporting the cultivation of future-ready talents in the digital era. [Method/Process] This research employed a web-based survey methodology to investigate AI literacy education programs in 23 academic libraries across Nordic countries (Denmark, Finland, Norway, and Sweden). The study systematically analyzed four key dimensions of these programs: educational stakeholders (including libraries, faculty, and IT departments), target audiences (undergraduates, graduate students, researchers, and faculty), educational content (covering both technical skills and ethical considerations), and instructional formats (such as workshops, courses, and online modules). The selection of Nordic libraries as case studies was based on their established reputation in digital literacy education and early adoption of AI-related services. Data collection focused on publicly available information about each library's AI education initiatives. 
The analysis particularly emphasized how these libraries integrated AI literacy within their existing information literacy frameworks while addressing the specific needs of different user groups. [Results/Conclusions] The investigation revealed several effective practices in AI literacy education. First, successful programs typically involved collaboration among multiple stakeholders, with libraries working closely with academic departments, IT services, and sometimes external partners to develop comprehensive curricula. Second, the content was carefully designed to address different competency levels, from basic AI awareness for undergraduates to advanced applications for researchers. Third, most programs balanced technical instruction with critical discussions about ethical challenges such as algorithmic bias and data privacy. Fourth, diverse delivery methods were employed, including hands-on workshops, credit-bearing courses, and self-paced online modules, allowing for flexibility in learning. For Chinese academic libraries seeking to enhance their AI literacy offerings, these findings suggest several practical recommendations: establishing cross-departmental collaboration mechanisms to pool expertise and resources; developing tiered educational content that caters to users with varying needs and backgrounds; incorporating both technical training and ethical discussions into the curriculum; and adopting flexible teaching formats to maximize accessibility. Future development should focus on creating localized AI literacy frameworks that consider China's unique educational context and technological landscape, while maintaining international perspectives through continued dialogue with global peers.

  • QIAN Li, WANG Qianying, LIU Yi, ZHANG Yuanzhe, CHANG Zhijun
    Journal of library and information science in agriculture. 2025, 37(5): 5-14. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0386

    [Purpose/Significance] Currently, large language models (LLMs) and agents have emerged as core technical paradigms in artificial intelligence, with their integration into scientific research scenarios holding profound significance for transforming research paradigms. Traditional scientific research is facing an increasing number of challenges such as inefficient literature searches, the processing of massive amounts of data, repetitive experimental tasks, and barriers to collaborative innovation. Agents, empowered by LLMs, offer a promising solution to these bottlenecks by enabling intelligent automation and adaptive collaboration across research workflows. Beyond basic task assistance, they play a pivotal role in facilitating knowledge fusion, accelerating breakthroughs in frontier areas, and reshaping traditional research models. This study aims to clarify the core techniques and applications of agents in scientific research, highlighting their transition from auxiliary tools to integral innovation partners, which is crucial for accelerating knowledge discovery, enhancing research efficiency, and promoting the shift toward intelligent and collaborative research models. [Method/Process] Employing an objective, inductive approach, this study thoroughly explains the core technical modules of agents including planning, perception, action, and memory, as well as the operational mechanisms of multi-agent collaboration. It also integrates an analysis of agent applications throughout the entire scientific research lifecycle. This analysis covers key scenarios including literature review and idea formulation, experimental planning and design, data processing and execution, result analysis and knowledge discovery, and research report composition. By analyzing the application value and existing limitations of agents, this study proposes prospects and recommendations for the application and development of agents in scientific research scenarios. 
    [Results/Conclusions] The findings reveal that LLM-driven agents are evolving from basic task executors to active participants in scientific discovery, demonstrating significant transformative potential throughout the entire research workflow. They facilitate more efficient information processing, smarter experimental design, and deeper knowledge integration, thereby redefining traditional research patterns. However, several challenges persist, including limitations in long-range reasoning capabilities and underdeveloped ecosystem support. There are also ethical and security concerns, such as data privacy and academic integrity. To address these, future efforts should focus on strengthening intelligent computing infrastructure for scientific data, deepening collaborative development of domain-specific agents, establishing a unified open collaboration framework with standardized interfaces, and building "human-in-the-loop" hybrid systems and multiple evaluation mechanisms. These measures will enable agents to become core partners in scientific innovation, driving the transition of research paradigms toward greater intelligence and collaboration.

  • LIU Yihan, CHU Yuxia, ZHAI Yujia
    Journal of library and information science in agriculture. 2025, 37(12): 20-35. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0556

    [Purpose/Significance] Short video platforms have become the core arena for the digital presentation and dissemination of intangible cultural heritage (ICH). However, the "Matthew Effect" in the digital attention economy often causes high-quality ICH content to be submerged. Existing research predominantly suffers from "modal segmentation," focusing on single modalities such as text and visuals in isolation, which fails to explain how these elements synergistically drive user engagement. To address this gap, this study constructs a communication effect evaluation model based on multimodal machine learning. The innovation of this research lies in integrating computational communication methods with traditional persuasion theories, moving beyond simple content analysis to a quantifiable predictive framework. By identifying key influencing factors through data fusion, this study provides a scientific basis for optimizing the digital production strategies of the ICH content, offering significant value for enhancing the visibility of traditional culture and overcoming the barriers of digital dissemination. [Method/Process] This study integrates the elaboration likelihood model (ELM) and media ritual theory to establish a "cognitive-behavioral-cultural" dual-path analytical framework. Theoretically, the study maps content quality (video/audio/text) to the "Central Route" and source credibility (author attributes) to the "Peripheral Route." Empirically, focusing on ICH videos on Douyin as the subject, the study collected data from May 2024 to May 2025. After rigorous cleaning, a dataset of 2,869 valid samples was established. The study employs a multimodal feature engineering approach: visual and textual features are extracted to represent content quality; audio features (including FBank and MFCC) are processed using the OpenSMILE toolkit to capture prosodic and spectral characteristics; and author data are collected to quantify social influence. 
The Random Forest algorithm is utilized to fuse these heterogeneous data sources, analyze feature importance, and predict communication effectiveness. [Results/Conclusions] The empirical results demonstrate that the multimodal fusion model significantly outperforms single-modality approaches in predicting communication effects, confirming that ICH dissemination is a result of complex symbol interaction. Feature importance analysis reveals a distinct hierarchy: Author attributes make the highest contribution, indicating that the "Peripheral Route" - driven by the creator's social capital - is the decisive factor in determining communication heat. Its persuasive power far surpasses that of the content itself. Regarding content modalities, text and video follow in importance, serving as critical tools for user retention, while the audio modality holds supplementary semantic value by setting the emotional atmosphere. The study does not account for dynamic temporal changes or external trending events. Effective ICH dissemination requires a synergistic strategy: prioritizing the accumulation of the author's social influence as the core driver, while simultaneously optimizing visual and textual quality. Future research should incorporate time-series analysis to capture dynamic communication trends.
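The fusion step described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn and NumPy are available, with random toy data and made-up modality widths standing in for the study's extracted visual, textual, audio (FBank/MFCC), and author features; it is not the paper's actual pipeline.

```python
# Sketch: concatenate heterogeneous feature groups, fit a Random Forest,
# then aggregate impurity-based importances per modality.
# All feature names and data below are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
# toy widths standing in for each modality's feature block
groups = {"visual": 4, "text": 3, "audio": 5, "author": 2}
X = rng.normal(size=(n, sum(groups.values())))
# synthetic "communication heat" target, weighted toward the author block
y = 2.0 * X[:, -2] + 1.5 * X[:, -1] + 0.3 * X[:, 0] \
    + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# sum per-feature importances into one contribution per modality
importance, start = {}, 0
for name, width in groups.items():
    importance[name] = model.feature_importances_[start:start + width].sum()
    start += width
print({k: round(v, 3) for k, v in importance.items()})
```

On this synthetic target the author block dominates the importance ranking, mirroring the hierarchy the abstract reports; with real Douyin data the same aggregation would be applied to the extracted multimodal features.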

  • DONG Ke, SONG Yuchen, WU Jiachun
    Journal of library and information science in agriculture. 2025, 37(7): 4-18. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0374

    [Purpose/Significance] The rapid development of artificial intelligence (AI) technology has reshaped the demand for data governance that is compliant, comprehensive, and refined. The European Union (EU) has proactively established a benchmark framework for AI data governance through targeted policy measures. However, both within and outside the EU, systematic analyses of the policy layout and governance characteristics of its AI data governance remain scarce. This paper focuses on the AI data governance policies in the EU, aiming to reveal the development process, policy layout, and governance characteristics of AI data governance in the region, providing valuable insights and references for advancing the global paradigm of AI data governance. [Method/Process] This paper systematically collects core AI data governance policy documents from 10 EU member states and the United Kingdom through multiple channels. By manually reviewing and selecting policy units related to "AI data governance," the paper traces the development process and uses a three-dimensional analytical framework - governance goals, governance bodies, and governance tools - to reveal the policy layout and governance characteristics of AI data governance in the EU. [Results/Conclusions] The study found that AI data governance in the EU has transitioned from soft law guidance to hard law regulation, gradually establishing three key governance goals: data ethics protection, data security defense, and data value release. Through the establishment of a multi-level legislative system and a coordinated execution framework, the EU focuses on regulatory constraints, procedural norms, AI system element support, and data ecosystem construction, demonstrating comprehensive governance capabilities. 
First, the EU has constructed a consensus framework for data governance through unified norms, centrally coordinating the diverse needs of member states during policy implementation, ensuring high consistency of governance rules across the EU. Second, the EU's policy design strikes a balance between rule uniformity and national autonomy, allowing member states to adjust policies flexibly according to their unique data cultures and industrial structures, fostering better localized governance. Third, the EU's governance model achieves a dynamic balance between "strong regulation" and "promoting development," ensuring the protection of citizens' rights through stringent ethical and risk prevention measures, while fostering innovation by releasing data value and driving AI industry growth. This paper provides a systematic analysis of the layout and characteristics of AI data governance in the EU. Future research could compare the EU framework with AI data governance policies in other major economies, such as the United States and China, to identify their respective strengths and weaknesses.

  • ZHANG Li, WANG Bo, JING Shui
    Journal of library and information science in agriculture. 2025, 37(5): 58-71. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0297

    [Purpose/Significance] As generative artificial intelligence (AI) transforms library services, existing evaluation systems fail to capture dynamic characteristics of AI-driven resource discovery. This study develops a dynamic evaluation framework for public libraries' AI-enhanced services, addressing the gap between technological innovation and service assessment. [Method/Process] The research employed a mixed-methods approach to develop and verify a multi-dimensional evaluation framework based on Knowledge Organization Systems (KOS) theory. The framework comprises five primary dimensions - physical environment, technical architecture, content organization, user interaction, and innovation capability - operationalized through fifteen secondary indicators. Each indicator was carefully designed to capture AI-specific capabilities, including cognitive guidance efficiency, multimodal interaction precision, semantic network depth, and generation-enhanced utilization rate. A sophisticated hybrid weighting methodology was implemented, integrating subjective and objective approaches. For subjective weights, the Analytic Hierarchy Process was employed with 30 domain experts constructing pairwise comparison matrices using standardized scaling methods. Geometric mean aggregation was applied to synthesize individual judgments, with consistency ratios maintained below the threshold to ensure logical coherence. For objective weights, the entropy method analyzed actual evaluation data variance, with greater variance indicating higher discriminatory power. The final weights were derived through multiplicative synthesis combining both approaches. 
The empirical validation study involved collecting 492 valid questionnaires from 14 strategically selected public libraries representing different stages of AI implementation between September and November 2024: one municipal library with comprehensive AI deployment, 11 district libraries with partial implementation, and 2 county libraries in early adoption phases. The questionnaire utilized a five-point Likert scale to assess real-time service performance across multiple scenarios. Statistical analysis employed fuzzy comprehensive evaluation to handle uncertainty in subjective assessments, structural equation modeling to validate construct relationships, and latent class analysis to identify distinct user interaction patterns. The framework demonstrated high reliability with Cronbach's alpha reaching 0.845 and strong construct validity with KMO value of 0.873. [Results/Conclusions] Content organization emerged as the most critical dimension with a combined weight of 0.302 2, while semantic network depth, cognitive guidance efficiency, and cross-media consistency ranked as top secondary indicators with weights of 0.090 3, 0.086 1, and 0.084 7 respectively. Performance evaluation revealed content organization scoring 74.873 points versus user interaction at 68.040 points, highlighting the gap between technical capabilities and user experience. Significant differences existed across library levels, with municipal libraries outperforming county libraries by over one point in technical architecture and semantic network depth. Four distinct user patterns emerged: technology-oriented, content-immersive, efficiency-focused, and assistance-dependent. Each requires a tailored service approach. The study proposes the following optimization strategies: multimodal interaction frameworks, adaptive user profiling, hierarchical collaboration mechanisms, and knowledge graph-based content reorganization.
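The hybrid weighting procedure described above can be sketched in a few lines. This is an illustrative reconstruction, assuming NumPy, using a made-up 3x3 pairwise-comparison matrix and toy indicator data; the study's actual expert matrices and questionnaire data are not reproduced.

```python
# Sketch: AHP subjective weights (row geometric means + consistency check),
# entropy-method objective weights, then multiplicative synthesis.
# The matrix A and data X are toy values, not the study's.
import numpy as np

# AHP: one expert's pairwise comparison of 3 dimensions (toy, consistent)
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
gm = A.prod(axis=1) ** (1 / A.shape[0])   # row geometric means
w_subj = gm / gm.sum()                    # subjective weights

# Consistency ratio (random index RI = 0.58 for n = 3)
n = A.shape[0]
lam = (A @ w_subj / w_subj).mean()        # principal eigenvalue estimate
CR = (lam - n) / (n - 1) / 0.58           # should stay below 0.1

# Entropy method: rows = evaluated libraries, cols = dimensions (toy)
X = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.9, 0.4],
              [0.9, 0.5, 0.6]])
P = X / X.sum(axis=0)                     # column-normalized proportions
k = 1 / np.log(X.shape[0])
e = -k * (P * np.log(P)).sum(axis=0)      # entropy per dimension
d = 1 - e                                 # divergence: higher = more informative
w_obj = d / d.sum()                       # objective weights

# Multiplicative synthesis of the two weight vectors
w = w_subj * w_obj
w_final = w / w.sum()
print(np.round(w_final, 4))
```

The fuzzy comprehensive evaluation and structural equation modeling stages the abstract mentions sit on top of these weights and are omitted here.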

  • CHANG Hao, XU Taotao, LI Feng
    Journal of library and information science in agriculture. 2025, 37(8): 61-77. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0365

    [Purpose/Significance] In cross-domain natural language processing (NLP) tasks, deep learning models often exhibit performance variations on texts with distinct domain characteristics, leading to a decline in generalization capability. Text complexity stands out as one of the most explanatory factors influencing model generalization. [Method/Process] This paper presents two innovative contributions. First, a multi-dimensional text complexity calculation framework grounded in systemic functional linguistics theory was constructed. This framework employs a hierarchical quantification approach. At the lexical level, it dynamically identified four types of non-standard expressions (abbreviations, emoticons, internet buzzwords, and alphanumeric mixed words) and calculated a normative score using a non-linear formula. At the sentence level, an innovative inverse fusion enhancement method (IFEM) was proposed, integrating punctuation anomaly density (weight 0.1), colloquial word ratio (weight 0.4), semantic ambiguity (weight 0.2), and sentence length features (weight 0.3), and generating a structural score through modeling of feature synergy and suppression effects along with an adaptive weighting mechanism. Finally, at the corpus level, a weighted fusion produced the global corpus complexity assessment. Experimental results demonstrated that this framework successfully quantifies intrinsic differences between domain texts. For instance, the measured complexity of the waimai_10k dataset reached 0.703, significantly higher than the 0.552 of the ChnSentiCorp_htl_all dataset, and the framework accurately captured complexity changes even after internal text reduction and substitution operations. Second, a knowledge base-enhanced dynamic adaptive CNN-BiLSTM model was designed. 
This model implemented the following innovative mechanisms: 1) The knowledge base adopts a dual mapping architecture of text-label and vector-label, supporting historical experience knowledge loading and real-time error recording; 2) Feature weights were adjusted based on the knowledge base content, such as strengthening positive semantic representations or weakening negative expressions. The model architecture integrated multi-scale CNN convolutional kernels for local feature extraction, bidirectional long short-term memory networks for capturing long-distance dependencies, and an attention mechanism to focus on key information. To validate the effectiveness of the proposed methods, experiments were conducted on four Chinese datasets. [Results/Conclusions] The results indicate that the complexity calculation framework exhibits strong robustness, with complexity fluctuations below 3.3% after a 20% sample reduction, and a maximum complexity increase of 13.8% upon short text data injection. Moreover, the framework effectively quantifies and differentiates text complexities, as evidenced by the 0.703 complexity of the waimai_10k dataset compared to the 0.552 of the ChnSentiCorp_htl_all dataset. Additionally, the proposed model demonstrated optimal performance across both the most standardized ChnSentiCorp_htl_all dataset and the most challenging waimai_10k dataset (achieving accuracies of 0.923 8 and 0.943 4, respectively), significantly outperforming Transformer and various large language models such as deepseek-v3 and qwen-plus.
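The sentence-level weighted fusion at the core of IFEM can be sketched minimally with the four stated base weights; the paper's adaptive weighting and synergy/suppression modeling are omitted here, and the feature values are hypothetical.

```python
# Base weights for the four sentence-level features, as stated in the abstract.
WEIGHTS = {
    "punctuation_anomaly_density": 0.1,
    "colloquial_word_ratio": 0.4,
    "semantic_ambiguity": 0.2,
    "sentence_length": 0.3,
}

def structural_score(features):
    """Weighted fusion of sentence-level complexity features (each in [0, 1])."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Hypothetical feature values for one sentence.
sentence = {
    "punctuation_anomaly_density": 0.2,
    "colloquial_word_ratio": 0.8,
    "semantic_ambiguity": 0.5,
    "sentence_length": 0.4,
}
score = structural_score(sentence)  # 0.1*0.2 + 0.4*0.8 + 0.2*0.5 + 0.3*0.4 = 0.56
```

The dominant 0.4 weight on colloquial word ratio reflects the framework's emphasis on informal usage as the main driver of sentence-level complexity.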

  • MA Haiqun, MAN Zhenliang
    Journal of library and information science in agriculture. 2025, 37(6): 4-19. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0271

    [Purpose/Significance] In the context of the booming development of the digital economy, public data, as a fundamental strategic resource of the country, plays an important role in promoting high-quality economic development, enhancing government governance capabilities, and stimulating social innovation vitality through its development and utilization. The "Opinions on Accelerating the Development and Utilization of Public Data Resources" (hereinafter referred to as the Opinions) is the first top-level design document at the central level for the development and utilization of public data resources, and its policy effectiveness directly affects the success or failure of the market-oriented allocation reform of data elements. Therefore, a comprehensive and systematic evaluation of the Opinions not only helps identify the strengths and weaknesses of the policy design, but also provides a scientific basis for continuous policy optimization, thereby ensuring the efficient development and utilization of public data resources and providing strong support for high-quality economic and social development. [Method/Process] This study introduces an innovative evaluation framework, S-CAD, which analyzes policy texts in depth through three dimensions: consistency, sufficiency, and dependency. 1) Consistency analysis focuses on the logical coherence between policy positions, goals, means, and expected outcomes. 2) Sufficiency analysis evaluates the necessity and adequacy of policy measures for achieving goals. 3) Dependency analysis identifies key chains and stakeholders' interests and demands in policy implementation to evaluate the feasibility and acceptability of the policy. In terms of specific applications, this study first clarifies the dominant viewpoint of the policy (policy makers) and the related viewpoints of policy implementers, participants, and influencers. 
Subsequently, four typical elements (stance, objectives, means, and expected outcomes) were identified from the policy text, and an analysis chart of the content of the Opinions was constructed. Scholars from the field of information resource management were invited to participate to ensure the scientific rigor and accuracy of the evaluation. Consistency analysis shows that the policy stance, objectives, means, and expected outcomes of the Opinions are logically closely related: the objectives revolve around accelerating the development and utilization of public data resources, the means and objectives support each other, and the expected outcomes are highly consistent with the means, reflecting the systematic and rational design of the policy. The analysis of necessity and sufficiency shows that the policy measures play an important role in achieving the goals; measures such as deepening the reform of data element allocation and regulating the authorized operation of public data all provide strong guarantees for achieving the policy goals. The dependency analysis reveals potential challenges in policy implementation, including difficulties in coordinating departmental interests, unclear details of data authorization operations, insufficient data quality and availability, and public concerns about privacy protection. In response to these issues, this study proposes suggestions such as strengthening departmental collaboration, clarifying data authorization operation processes, improving data quality and availability, and strengthening publicity on data security management and privacy protection. [Results/Conclusions] The issuance of the Opinions provides an important policy framework and guidance for the development and utilization of public data resources, but there is still room for improvement in areas such as departmental collaboration and privacy protection. 
To enhance public trust in and support for the policy, future policy measures should be further refined, data authorization operation mechanisms should be optimized, data quality and utilization efficiency should be improved, and data security management and privacy protection should be strengthened. By continuously monitoring the development trends of the data industry and adjusting policies in a timely manner, policymakers can ensure the efficient and orderly promotion of the development and utilization of public data resources, injecting strong impetus into the high-quality development of the economy and society.

  • AN Bo
    Journal of library and information science in agriculture. 2025, 37(12): 81-94. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0422

    [Purpose/Significance] Although traditional Chinese medicine (TCM) classics contain valuable knowledge, they remain difficult to process automatically due to their complex page layouts, the coexistence of traditional and simplified variant characters, alias-rich terminology, and strong cross-paragraph semantic dependencies. Existing pipelines often split the processes of optical character recognition (OCR), normalization, entity recognition, relation extraction, and entity alignment, which leads to error propagation. Additionally, many studies focus on modern clinical texts rather than historical sources. This paper addresses these gaps by presenting an end-to-end pipeline that transforms ancient page images into a structured knowledge graph. The central contribution is CoTCMKE, a chain-of-thought (CoT), ontology-constrained joint model that performs named entity recognition (NER), relation extraction (RE), and entity alignment (EA) simultaneously. By making intermediate reasoning explicit and binding predictions to a TCM ontology, the framework improves batch digitization efficiency, extraction accuracy, and interpretability for digital humanities and library & information science (LIS) applications. [Method/Process] We built a unified pipeline with three steps. 1) Text recognition: a multimodal large language model (MLLM) recognizes text directly from complex pages with mixed vertical/horizontal layouts and performs context-aware traditional-to-simplified conversion. 2) Ontology construction: following the principles of semantic completeness, multimodal friendliness, evolvability, and interoperability, experts curate an ontology of core TCM concepts (e.g., diseases, symptoms, formulae, herbs) with aliases and constraints to guide decoding and ensure consistency. 
3) Knowledge extraction: CoTCMKE integrates CoT with ontology constraints for multi-task extraction: entity localization and normalization, ontology-consistent relation generation, and cross-passage/cross-volume entity alignment. Constraint-aware decoding uses immediate checks and backtracking when a generated entity or relation violates ontology rules or alias mappings. For data, we used Shang Han Lun. Qwen2.5-VL-32B assists OCR, conversion, and initial auto-labeling; two TCM-trained annotators independently review and reconcile the results. The final sets contain 2 340 NER items, 1 880 RE items, and 450 EA pairs, evaluated with 10-fold cross-validation. The MLLM was adapted via LoRA with early stopping. The comparisons include traditional deep models, a unified IE framework, prompt-only inference, and a LoRA-SFT baseline. [Results/Conclusions] On Shang Han Lun, CoTCMKE outperformed LoRA-SFT by +3.1 F1 for NER, +1.6 for RE, and +1.3 for EA. In cross-book transfer to Jin Kui Yao Lue, the model maintained stable performance without retraining, indicating robustness and scalability. Ablation results showed that CoT reduced boundary and ambiguity errors, while ontology constraints curbed illegal triples and alias fragmentation; combining both yielded the best overall results. The analysis yielded the following observations. 1) Explicit medical relation templates act as semantic guardrails. 2) Proactive alias consolidation before decoding reduces entity scattering and improves alignment. 3) Explicit type-path guidance helps disambiguate fine-grained categories (e.g., pulse findings vs. general symptoms). The framework supports the automatic construction of "formula-symptom-herb" triples, as well as alias and variant normalization. It also supports evidence-linked semantic search and navigation, which benefit LIS workflows, education, and research. 
Current limitations include the scope of the curated ontology and its focus on two classics. Future work will extend to additional TCM classics and broader historical corpora, support continual incremental learning, and deliver knowledge services based on the constructed graphs.
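The constraint-aware decoding idea, i.e., checking each generated triple against ontology rules and alias mappings and rejecting violations so the decoder can backtrack, can be illustrated with a toy validator. The entity types, aliases, and relations below are hypothetical examples, not the paper's actual ontology.

```python
# Hypothetical alias table (variant surface form -> canonical entity).
ALIASES = {"桂枝汤方": "桂枝汤"}
# Hypothetical entity typing from the ontology.
ENTITY_TYPES = {"桂枝汤": "formula", "桂枝": "herb", "发热": "symptom"}
# Licensed (head type, relation, tail type) patterns.
ALLOWED = {("formula", "contains", "herb"), ("formula", "treats", "symptom")}

def validate_triple(head, relation, tail):
    """Normalize aliases, then accept the triple only if its type pattern is
    licensed by the ontology; None signals the decoder to backtrack."""
    head = ALIASES.get(head, head)
    tail = ALIASES.get(tail, tail)
    pattern = (ENTITY_TYPES.get(head), relation, ENTITY_TYPES.get(tail))
    return (head, relation, tail) if pattern in ALLOWED else None

# A variant formula name is normalized before checking:
# validate_triple("桂枝汤方", "contains", "桂枝") accepts the triple,
# while validate_triple("桂枝", "treats", "发热") is rejected (herb cannot treat).
```

In the full model this check runs during generation, so an illegal triple is never committed; the sketch only shows the post-hoc filtering logic.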

  • CHENG Fan, GU Liping
    Journal of library and information science in agriculture. 2025, 37(8): 78-91. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0315

    [Purpose/Significance] This paper focuses on the development process and service mechanisms of Japan's Research Data Cloud (RDC) system, a core national infrastructure coordinated by the Research Center for Open Science and Data Platform (RCOS). Against the backdrop of growing global attention to open science, RDC presents a practical model for integrating data management, open sharing, publication, search, and preservation throughout the research lifecycle. The paper highlights the unique collaborative model of RDC, which is characterized by a small team driving large networks. Compared to prior literature that often emphasizes technical architectures or isolated institutional efforts, the paper situates RDC within Japan's broader open science strategy, offering both theoretical and practical insights. It explores how RDC contributes to advancing the FAIR data principles, supporting cross-sector innovation, and strengthening national science and technology governance. The analysis also offers strategic lessons for China in building a sustainable and service-oriented research data system. [Method/Process] Using a qualitative case study approach, the paper draws on a combination of primary and secondary sources related to the RDC initiative, including official reports, project documentation, academic literature, and publicly available platform data. It systematically analyzes the organizational structure and collaborative mechanisms of RDC, focusing on the institutional roles, platform components (GakuNin RDM, WEKO3, CiNii Research), and key technological innovations such as data governance, data provenance, secure computing, and trusted storage. 
In particular, it analyzes how RCOS functions as a neutral coordinator that bridges stakeholders across ministries, universities, and research organizations, and how it plays a role in translating policy mandates into technical services, integrating institutional workflows, and fostering community participation in the open science ecosystem. [Results/Conclusions] Despite constrained resources, RDC has developed a comprehensive research data ecosystem that serves researchers, data managers, librarians, industry, and the public. Japan's experience demonstrates that emphasizing interoperability, governance coordination, and capacity building, especially through small-scale research teams and nationwide collaborative networks, can effectively support the development of robust research infrastructure. The paper concludes by proposing several recommendations for China: the creation of independent coordination agencies to avoid fragmented development, the establishment of standardized service frameworks to enhance interoperability, and the implementation of tiered training programs to improve data literacy and management capacity across disciplines. Future research should further explore comparative institutional models, examine the long-term sustainability of open science ecosystems under different governance conditions, and investigate the cultural, legal, and technical dimensions that shape localized approaches to research data governance.

  • GAO Dan, CUI Bin
    Journal of library and information science in agriculture. 2025, 37(7): 61-72. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0285

    [Purpose/Significance] As digital technology continues to reshape the preservation and utilization of cultural heritage, the study of the value co-creation of cultural heritage data resources has gained increasing importance. The growing significance of cultural heritage data, coupled with advancements in digital tools such as big data, artificial intelligence, and virtual reality, requires a deeper understanding of the collaborative processes that create value. This research focuses on the value co-creation mechanism of cultural heritage data resources, aiming to offer new perspectives on how diverse stakeholders, including cultural heritage institutions, digital technology providers, and the public, interact dynamically across different stages of data resource management. By proposing a three-dimensional analysis framework based on "stages-subjects-scenarios," this study not only enhances the understanding of the co-creation process, but also contributes to the academic field by exploring the role of different stakeholders in different contexts. The innovation lies in applying this framework to analyze the specific mechanisms of value co-creation, highlighting the different levels of stakeholder involvement across the various stages of data management and usage. The study provides practical implications for improving the management and utilization of cultural heritage data resources, particularly in the context of fostering interdisciplinary collaboration and community engagement. [Method/Process] The study takes an integrated approach, combining case analysis, stakeholder theory, and qualitative research methods, with a particular focus on expert interviews and case study reviews. Through a systematic review of both domestic and international examples, the research explores how different phases of data management, such as data collection, integration, sharing, and application, unfold in practice. 
The case studies were selected using a multi-source approach, which includes authoritative recommendations, literature reviews, and online surveys, to ensure a diverse set of representative projects. Each case was analyzed to identify the key stages and stakeholders, and how they interact within specific application scenarios. The theoretical foundation is grounded in stakeholder theory and value co-creation frameworks, while empirical evidence was drawn from ongoing projects in the digital humanities and cultural heritage fields. Using this combination of theoretical and empirical sources, the research developed a thorough understanding of how value co-creation mechanisms evolve and manifest in cultural heritage data management. [Results/Conclusions] The research reveals that the value co-creation of cultural heritage data resources involves multiple stakeholders, each contributing differently at various stages of the process. The identified stages include data collection, integration, sharing, application, and dissemination, each with distinct stakeholder involvement. Key stakeholders include cultural heritage institutions, digital technology providers, content creators, government bodies, and the public, each playing a critical role at different phases. For instance, cultural heritage institutions are central during the data collection and preservation stages, while content creators and developers take a more prominent role during the application and innovation stages. The research also finds that stakeholder participation varies across different application scenarios, such as digital exhibitions, educational projects, and creative industries. The study concludes that achieving effective value co-creation requires a flexible, collaborative approach, tailored to the specific needs of each stage and scenario. 
Recommendations for future practice include improving collaboration between stakeholders, encouraging public participation, and establishing clearer frameworks for data governance and intellectual property rights.

  • ZHANG Ning, HE Boyun
    Journal of library and information science in agriculture. 2026, 38(2): 16-29. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0345

    [Purpose/Significance] The global population is aging at an unprecedented pace. As a key tool for addressing the challenges of digital inclusion for the elderly, developing a digital capital scale is of utmost importance. Digital capital not only encompasses the abilities and skills of the elderly in using information technology, but also captures the interaction among the social resources, cultural capital, and economic capital they acquire in the digital environment. A validated scale therefore helps enhance the theoretical understanding of the heterogeneity of the elderly's digital capabilities. [Method/Process] First, a semi-structured interview method was adopted to conduct in-depth interviews with 24 elderly individuals, based on the digital capital framework and combined with digital life scenarios in China. We also referred to existing studies on the digital literacy and digital capabilities of the elderly. Based on the coding results of the interview transcripts, a 7-dimensional scale for measuring the digital capital of the elderly was derived. Then, a preliminary reliability and validity analysis was conducted on a pre-test sample of 180 respondents, and the dimension indicators were adjusted accordingly. Subsequently, using data from 380 formal questionnaires, the scale was verified and refined. Based on the principle of conceptual interpretability, the factor names of the four resulting dimensions were re-examined, and the final version of the scale was established. The elbow method and the K-means clustering algorithm were then used to classify the digital capital levels of the elderly. [Results/Conclusions] The final scale consists of 19 items, covering four dimensions: digital resource acquisition ability, digital creation and expression ability, digital environment adaptation ability, and digital tool learning ability. Following optimization, the scale demonstrates excellent reliability and validity, and aligns closely with aging-friendly scenarios. 
The scale can serve as a standardized instrument for measuring the digital capital level of the elderly population in China, laying the foundation for future large-scale surveys. By applying this scale, it is possible to effectively distinguish between groups of elderly individuals with varying levels of digital capital, providing empirical support for personalized digital services for the elderly. For the first time, this study systematically applies the digital capital theoretical framework to the elderly population, which compensates for the lack of standardized measurement tools and highlights the unique needs and challenges of the elderly in terms of dimensions, usage scenarios, and capability transformation. The proposed hierarchical model of digital capital among the elderly deepens our theoretical understanding of the differences in digital capabilities within this population.
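The classification step described above (the elbow method followed by K-means on the four scale dimensions) might look like the following sketch; the synthetic scores stand in for the study's 380 questionnaires, and the two-cluster choice is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic respondent scores: rows = respondents, cols = the four dimensions
# (resource acquisition, creation/expression, environment adaptation, tool learning).
rng = np.random.default_rng(0)
low_group = rng.normal(2.0, 0.3, size=(50, 4))    # lower digital capital
high_group = rng.normal(4.0, 0.3, size=(50, 4))   # higher digital capital
scores = np.vstack([low_group, high_group])

# Elbow method: compute the within-cluster inertia for each candidate k
# and pick the k where the curve bends.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(scores).inertia_
            for k in range(1, 6)}

# Final clustering at the chosen k assigns each respondent a capital level.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```

Inspecting `inertias` (it drops sharply from k=1 to k=2 and flattens afterward for this data) motivates the chosen number of clusters.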

  • ZHAO Hui, CHEN Jinghao, GUO Sha, LI Zhixing, YAN Longfei
    Journal of library and information science in agriculture. 2025, 37(11): 4-29. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0729

    In the digital economy era, the efficient, secure, and compliant circulation of cross-border data flow has become a key issue for the coordination of global industrial chains and the deepening of regional cooperation. It is a driving force for the high-quality development of the global digital economy. Currently, cross-border data flow is confronted with multiple challenges, including the interweaving of driving forces and contradictions, inadequate adaptation between mechanisms and technologies, and poor connection between compliance requirements and practical implementation. There is an urgent need to formulate systematic solutions from both theoretical and practical perspectives. To this end, this journal has invited five experts from universities and enterprises to organize a roundtable discussion on the complete logical chain of "the underlying logic, mechanism construction, trend prediction, compliance governance, and scenario-based implementation of cross-border data flow". The key viewpoints are as follows: 1) Dynamic Mechanism and Governance Logic of Cross-border Data Flow: Cross-border data flow is jointly driven by three major forces: economic interests, technological innovation, and international cooperation. Meanwhile, it faces core contradictions including the trade-off between sovereign security and flow efficiency, fragmentation of rules and institutional coordination, and technological balance and the digital divide. It is necessary to establish a governance philosophy of "dynamic balance" and build a multilateral co-governance system through three types of tools (algorithm-based supervision, technology empowerment, and institutional experimentation) to promote the shift from "fragmented rule-based games" to "systematic coordination". 
2) Construction of a Collaborative Mechanism for Cross-border Data Flow: The mechanism for cross-border data flow needs to break through the limitations of a single dimension and form a multi-dimensional collaborative system integrating "policy, technology, and industry". At the policy level, regulatory sandbox pilots, standard mutual recognition, and compliance infrastructure sharing are adopted to address regulatory barriers. At the technical level, scenario-specific needs are met based on a maturity gradient, and the integrated innovation of "technology + management" is promoted. At the industry level, the self-regulatory role of professional fields such as library and information science (LIS) is leveraged to compensate for the rigidity of policies and build a closed-loop governance structure. 3) Trend Evolution and Risk Resilience of Cross-border Data Flow: In the next 3 to 5 years, cross-border data flow will exhibit characteristics of structural growth and domain differentiation. Smart manufacturing and digital trade will drive growth on a large scale, while smart healthcare and modern agriculture will emerge as core sectors. It is imperative to address bottlenecks in infrastructure upgrading and the impact of "black swan" events, establish a risk resilience system spanning the technical, governance, and strategic dimensions, promote service model innovation in LIS, and make forward-looking arrangements in the agricultural sector. 4) Compliance Governance and China's Path for Cross-border Data Flow: China has established a hierarchical and classified governance framework centered on three fundamental laws, and explored practical paths through institutional innovations such as the negative list system in free trade pilot zones. 
To tackle challenges including discrepancies in legal compliance requirements, technical barriers, and the complexity of regulatory coordination, it is necessary to strengthen legal synergy and rule mutual recognition, advance infrastructure construction and technological innovation, and improve the compliance service support system, thereby forming a China-specific path that balances security and controllability with high efficiency and convenience. 5) Practice of Cross-border Data Circulation and Credit Product Mutual Recognition: Cross-border data circulation lays a core foundation for the cross-border mutual recognition of credit products, which holds significant strategic value for promoting the facilitation of international trade and supporting the international development of enterprises. Currently, it faces challenges such as data security compliance, standard discrepancies, and high technical costs. To advance the implementation of cross-border mutual recognition of credit products, efforts should be made to improve the legal and regulatory framework and standard system, strengthen the construction of technical infrastructure, deepen international cooperation and mutual recognition mechanisms, and cultivate international credit service institutions.

  • LIU Hao, JIN Xiaohe
    Journal of library and information science in agriculture. 2025, 37(7): 73-90. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0372

    [Purpose/Significance] Studying whether the development of the digital economy can boost rural household consumption bears on expanding rural consumption potential in the digital age. This is significant for guiding the country's overall economic development and overcoming obstacles that restrict the growth of domestic demand. Research on household consumption has increasingly expanded to cover the digital economy. The contribution of this paper lies in the following aspects. First, few scholars have refined consumption into distinct types when studying the relationship between the two. Starting from the heterogeneous consumption structure, this paper explores how the "broadband rural" policy differentially affects the consumption structure of rural households, creating diversified consumption needs and experiences, promoting new consumption, and further tapping the consumption potential of rural households. Second, previous scholars primarily focused on macro city-level data, while this paper uses micro-level data from the China Family Panel Studies (CFPS) from 2010 to 2022 to extend the identification period of the policy's dynamic effects. At the farmer level, it examines the differential impact of the digital economy on the individual consumption behavior of farmers. Third, it introduces family endowment into the mechanism through which the digital economy influences farmers' household consumption, discusses how endowment differences moderate the policy's influence, and supplements the research perspective of previous scholars. 
[Method/Process] Based on data from the China Family Panel Studies (CFPS) from 2010 to 2022, this paper constructs seven waves of unbalanced panel data, takes the "broadband rural" policy as a quasi-natural experiment, adopts difference-in-differences (DID), triple-difference, and PSM-DID methods, and draws on the Keynesian absolute income hypothesis, information asymmetry theory, and precautionary savings theory to evaluate the impact of the digital economy on farmers' household consumption. [Results/Conclusions] The results show that the digital economy has significantly promoted the consumption of rural households, although its effect on enjoyment-oriented consumption is not significant. Mechanism and heterogeneity analyses indicate that family endowment has a significant moderating effect and that the impact of the digital economy differs across groups. Based on this, the paper puts forward countermeasures and suggestions to channel the dividends of digital economy development downward, focus support on heterogeneous groups, and reasonably advocate new forms of consumption. The impact of the digital economy on rural household consumption still needs further exploration, which is of great significance for realizing the dual circulation of domestic and international markets; however, this is difficult to achieve at present due to data limitations and the need for long-term tracking. Therefore, future work will extend the effect analysis of the "broadband rural" policy to examine its long-term impact on rural household consumption.
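The core difference-in-differences logic behind the identification strategy can be sketched with synthetic data; the variable names and the 0.8 "true effect" below are illustrative assumptions, not CFPS estimates.

```python
import numpy as np

# Synthetic panel: treated = covered by the "broadband rural" policy,
# post = observed after the rollout, y = log household consumption.
rng = np.random.default_rng(1)
n = 2000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
true_effect = 0.8   # hypothetical policy effect built into the simulation
y = (5.0 + 0.3 * treated + 0.5 * post
     + true_effect * treated * post + rng.normal(0, 0.5, n))

def did_estimate(y, treated, post):
    """DID = (treated post - treated pre) - (control post - control pre)."""
    cell_mean = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))

estimate = did_estimate(y, treated, post)   # should recover roughly 0.8
```

The same contrast is what the interaction coefficient in the paper's regression specification estimates; the group-level pre-trends in `treated` and `post` cancel out by construction.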

  • FENG Li, GUO Bochi, GAO Mian
    Journal of library and information science in agriculture. 2026, 38(1): 58-70. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0444

    [Purpose/Significance] The rapid expansion of artificial intelligence generated content (AIGC) is transforming how intellectual property (IP) literacy is cultivated in universities. Conventional approaches, often constrained by disciplinary fragmentation, uneven teaching capacity, and time-space limitations, are increasingly misaligned with human-AI collaborative learning. Against this backdrop, IP literacy must integrate legal knowledge, ethical judgment, compliance awareness, and AI-enabled creative practice. This study clarifies the renewed connotations of IP literacy in the AIGC era, develops a theoretically grounded model of influencing factors, and examines how multiple educational conditions combine to generate high-level outcomes. By focusing on IP literacy rather than generic digital competence, the paper addresses a clear gap in existing research and offers a configuration-based understanding that links theory to implementable strategies for intelligent, student-centered IP literacy education. [Method/Process] Grounded in Activity Theory, the study developed a six-dimensional framework consisting of the following variables: teacher professional competence, AI-IP awareness, diversified educational support, role division, evaluation mechanisms, and AI resources. These variables were operationalized via a structured questionnaire. Fuzzy-set Qualitative Comparative Analysis (fsQCA) was then employed to identify conjunctural causality and equifinal pathways that extend beyond linear models. High-outcome configurations were identified through variable calibration, truth-table analysis, and minimization. Robustness was confirmed by tightening the PRI consistency threshold from 0.80 to 0.85; the path structure, overall coverage, and overall consistency remained stable. [Results/Conclusions] Findings show that AIGC-enabled IP literacy emerges through multiple effective configurational paths, rather than from a single dominant factor. 
Across high-outcome configurations, teacher professional competence, AI–IP awareness, and diversified educational support consistently function as core drivers that shape learning processes and outcomes. Evaluation mechanisms and AI resources act as complementary or substitutive conditions, reinforcing effectiveness under specific institutional and resource constraints. Three typical paths were identified: a path emphasizing practice generation coupled with collaborative organization; a path that integrates resource sharing with practice-oriented development; and a path highlighting collaborative division of labor and effective communication to compensate for limited technical supply. Together, these paths confirm the internal logic of the six-dimensional model and demonstrate that coordinated configurations, rather than isolated improvements, are necessary to optimize IP literacy education in AI-rich contexts. Practical implications include strengthening AI-oriented teacher development, embedding AI-IP awareness in curricula and supporting services, building cross-unit collaboration mechanisms, and aligning role division and process evaluation with available AI resources. Although the cross-sectional design and limited scope constrain generalizability, the results provide a theoretically grounded and empirically supported basis for developing intelligent, collaborative, and student-centered IP literacy systems and offer a foundation for future longitudinal and comparative research in AIGC-enabled higher education.
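The fsQCA calibration and consistency checks named in the abstract can be sketched in a few lines. This is a generic illustration of Ragin's direct calibration and sufficiency consistency, not the study's code; the anchor values (2 / 4 / 6 on a 7-point scale) and the sample scores are hypothetical.

```python
import math

def calibrate(x, non, cross, full):
    """Ragin's direct calibration: map a raw score x to a fuzzy-set
    membership in [0, 1]. Log-odds is +3 at the full-membership anchor,
    0 at the crossover, and -3 at the full non-membership anchor,
    interpolated linearly and passed through a logistic transform."""
    if x >= cross:
        log_odds = 3.0 * (x - cross) / (full - cross)
    else:
        log_odds = 3.0 * (x - cross) / (cross - non)
    return 1.0 / (1.0 + math.exp(-log_odds))

def consistency(condition, outcome):
    """Sufficiency consistency of 'X leads to Y':
    sum(min(x, y)) / sum(x) over all cases."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical 7-point survey scores with anchors 2 / 4 / 6
scores = [1.5, 3.0, 4.0, 5.5, 6.8]
memberships = [calibrate(s, non=2, cross=4, full=6) for s in scores]
```

In practice a configuration is retained in the truth table only when its (PRI) consistency clears the chosen threshold, such as the 0.80 and 0.85 values reported above.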

  • CHEN Yuanyuan, FU Bin, GAO Yuan, QIAO Junwei
    Journal of library and information science in agriculture. 2025, 37(6): 55-69. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0275

    [Purpose/Significance] With the rapid advancement of global science and technology, emerging technologies are constantly evolving, placing higher demands on national strategic planning and resource allocation. Artificial intelligence (AI), as a core driver of the current technological revolution, requires close attention to its internal technical topic evolution to anticipate disruptive changes and guide the direction of innovation. Although existing research primarily focuses on identifying technical topics through bibliometric or patent analysis, there is still insufficient quantitative forecasting of their future development. To address this gap, this study proposes an integrated analytical framework that combines BERTopic-based topic modeling with an IWOA-optimized BiLSTM neural network, achieving a unified approach to both topic identification and trend forecasting. Unlike traditional LDA models or expert-based subjective judgment, this method demonstrates significant advancements in semantic representation, model optimization, and prediction accuracy. It expands the theoretical boundaries of emerging technology forecasting and offers valuable quantitative support for science and technology policy-making. [Method/Process] This study utilized 22,243 AI-related patent records collected from 2015 to 2024. BERTopic was applied to extract representative technology topics from patent abstracts. A multi-dimensional evaluation system was constructed using three indicators: novelty, impact, and growth rate, capturing different aspects of emerging technologies. The CRITIC method was employed to objectively assign weights to each dimension, enhancing the robustness and balance of the composite index. BERTopic, which integrates BERT-based semantic embeddings with HDBSCAN density-based clustering, improves both the coherence and granularity of topic extraction.
For trend prediction, an Improved Whale Optimization Algorithm (IWOA) was introduced to fine-tune BiLSTM's learning rate, epoch count, and hidden layer size. IWOA enhances global optimization through Gaussian chaos initialization, Levy flight strategy, nonlinear control factors, and elite reverse learning. [Results/Conclusions] Experimental results show that BERTopic achieves superior topic coherence compared to baseline models and successfully identifies five emerging technical areas, including Intelligent Models and Algorithms, Information Processing, Deep Neural Networks, Smart Robotics, and Numerical Control Systems. The IWOA-BiLSTM model outperforms conventional LSTM and BiLSTM models in error metrics (MAPE, RMSE, and MAE), confirming its predictive advantage. Forecast results indicate that these emerging topics will experience sustained growth over the next five years, reflecting strong application potential and industrial value. This study confirms the feasibility and effectiveness of the integrated "identification–prediction" framework, providing a data-driven tool for strategic decision-making in science and technology development. Limitations include dependence on data quality and a current focus on the field of AI. Future research should expand the framework to other strategic areas, such as renewable energy, biomedicine, and intelligent manufacturing, to further validate its generalizability.
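The CRITIC weighting step mentioned above admits a compact sketch. Only the three dimensions (novelty, impact, growth rate) come from the abstract; the scores below are invented, and this is a textbook implementation rather than the authors' code.

```python
import statistics as st

def critic_weights(matrix):
    """CRITIC objective weighting.
    matrix: rows = alternatives (topics), cols = indicators.
    Each indicator is weighted by its contrast intensity (std dev of
    the min-max normalized column) times its total conflict with the
    other indicators (sum of 1 - Pearson correlation)."""
    cols = list(zip(*matrix))
    norm = []
    for col in cols:                       # min-max normalize per column
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])

    def pearson(a, b):
        ma, mb = st.mean(a), st.mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) *
               sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0

    info = []
    for j, col in enumerate(norm):
        sigma = st.pstdev(col)             # contrast intensity
        conflict = sum(1 - pearson(col, other)
                       for k, other in enumerate(norm) if k != j)
        info.append(sigma * conflict)      # information content C_j
    total = sum(info)
    return [c / total for c in info]       # normalized weights

# Invented scores for four topics on (novelty, impact, growth rate)
scores = [[0.9, 0.2, 0.5],
          [0.4, 0.8, 0.6],
          [0.6, 0.5, 0.9],
          [0.2, 0.7, 0.3]]
weights = critic_weights(scores)
```

The resulting weights combine the three indicator scores into the composite emergence index used to rank topics.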

  • HOU Yanhui, WANG Zixuan, WANG Jiakun
    Journal of library and information science in agriculture. 2026, 38(1): 44-57. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0395

    [Purpose/Significance] Starting from the perspective of technological complementarity, this paper proposes a new approach for identifying technological opportunities by comprehensively using outlier patents and hot patents. A fusion analysis of innovative outlier patents and market-mature hot patents is carried out to identify "innovation-maturity" technological opportunities that combine innovation and maturity, which is of great significance for enriching the theory and methods of technological opportunity identification. [Method/Process] First, based on the "association distribution" characteristics of patent classification numbers, a two-stage method was adopted to screen patents. In the first stage, we used association rule algorithms to find classification numbers with weak and strong associations, and obtained initial outlier patents and initial hotspot patents. In the second stage, outlier detection algorithms were used to obtain the marginalized classification numbers of the two types of patents from the first stage. Patents containing marginalized classification numbers were selected as the final outlier patents, while patents containing such classification numbers were removed to yield the final hotspot patents. Second, different methods were adopted for patent screening based on the differences in innovation and maturity of patent content. Using structured and unstructured data from patent databases, we constructed time-weighted indicators and keyword-uniqueness indicators as the screening indicators for innovative outlier patents. We constructed a technology lifecycle stage discrimination function and patent market value measurement indicators as the screening criteria for market-mature hot patents. The screened patents were classified into technical fields based on the major categories of the International Patent Classification. Finally, we identified technological opportunities based on technological complementarity.
By using the generative topographic mapping algorithm to obtain a map of technical blank points, the top K keywords in each blank point were extracted, and the sources of the keywords were marked to ensure that new technological opportunities have both good innovation capability and mature market prospects. Keyword combinations derived from the two different types of patents were regarded as "innovation-maturity" technological opportunities. [Results/Conclusions] Taking the field of new energy vehicle batteries as an example, an empirical analysis was conducted, yielding a total of 10 technological opportunities across 5 sub-technical fields. Through content comparison with relevant policy texts, 7 technological opportunities showed high consistency. The identification results were found to be highly consistent with the current technological layout and development direction of the field, indicating that this method is effective and scientifically sound for technological opportunity identification and can support technology forecasting and innovation decision-making.
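The first-stage screening by association strength of classification numbers might look roughly like the toy sketch below. It is a simplified pairwise-support stand-in for the paper's association rule and outlier detection algorithms; the IPC codes and the support threshold are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def code_associations(patents, min_support=0.3):
    """Toy association screening over patent classification codes.
    Each patent is a set of IPC codes. Code pairs whose co-occurrence
    support reaches min_support mark strongly associated 'hot' codes;
    codes never appearing in such a pair are outlier candidates."""
    n = len(patents)
    pair_counts = Counter()
    for codes in patents:
        for pair in combinations(sorted(codes), 2):
            pair_counts[pair] += 1
    strong = {pair for pair, c in pair_counts.items() if c / n >= min_support}
    hot_codes = {c for pair in strong for c in pair}
    all_codes = {c for codes in patents for c in codes}
    return hot_codes, all_codes - hot_codes

# Invented mini-corpus in the battery domain
patents = [
    {"H01M", "B60L"},           # battery + EV propulsion
    {"H01M", "B60L", "H02J"},
    {"H01M", "B60L"},
    {"G06N", "H01M"},           # rare AI + battery pairing
    {"C08J"},                   # isolated material code
]
hot, outliers = code_associations(patents, min_support=0.4)
```

In the paper's pipeline, the weakly associated codes feed the outlier-patent branch and the strongly associated ones the hotspot branch before the second-stage indicator screening.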

  • LIU Ting, LIU Shuhan, LIU Zhenyan, ZENG Dequan, HU Yuan
    Journal of library and information science in agriculture. 2025, 37(9): 32-48. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0408

    [Purpose/Significance] In the digital age, data elements have become a key factor of production, while insufficient bargaining power in the supply chain poses significant operational risks to enterprises. How to leverage the opportunities of the digital economy, maximize the role of data elements, and avoid operational risks caused by insufficient discourse power in the supply chain has become a key issue that enterprises urgently need to address. Investigating how data utilization enhances this power is vital for building resilient supply chains and informing governance decisions. Such an investigation also informs the further utilization of data elements, providing micro-level evidence on how data elements can optimize resource allocation and empower organizational decision-making. [Method/Process] This study employs a rigorous empirical approach using panel data from China's A-share listed companies from 2003 to 2022. A two-way fixed effects model serves as the primary estimator to control for unobserved heterogeneity. To credibly address potential endogeneity issues, such as reverse causality and sample selection bias, we implemented a comprehensive identification strategy incorporating instrumental variables, Heckman's two-stage correction model, and a series of robustness checks including alternative variable constructions and sub-sample analyses. Furthermore, we conducted mechanism analysis to elucidate the transmission channels and heterogeneity analysis to examine conditional effects across different types of firms. [Results/Conclusions] The empirical results demonstrate that improving the level of data element utilization can effectively strengthen a firm's supply chain bargaining power, reducing its dependence on large suppliers and customers and enhancing its influence in the supply chain.
This conclusion still holds after robustness tests such as replacing the regression model, adding control variables, and adjusting the sample period. Mechanism analysis indicates that the utilization of data elements strengthens supply chain discourse power primarily through two channels: improving supply chain efficiency and alleviating financing constraints. First, data elements optimize enterprises' inventory management, logistics scheduling, and supply chain collaboration, improving operational efficiency and reducing dependence on key suppliers and customers. Second, data elements improve the information transparency of enterprises, reduce external financing costs, and enhance the liquidity of funds, granting enterprises greater autonomy and bargaining power in supply chain transactions. A heterogeneity analysis revealed significant differences in the empowering effects of data elements across different types of enterprises: the effect on supply chain discourse power is more pronounced for non-labor-intensive and non-asset-intensive enterprises, and stronger for non-technology-intensive and non-high-tech enterprises. This suggests that companies that rely less on traditional physical resources are better able to use data to gain a competitive advantage. This study establishes a robust theoretical basis for data-driven supply chain management and presents significant policy implications. One limitation is its focus on listed companies; future research could extend this inquiry to small and medium-sized enterprises and global supply chain contexts.
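The two-way fixed effects estimator named in the abstract can be illustrated with the standard within transformation on a balanced panel. The synthetic data below are invented; real work on the A-share panel would use an econometrics package with clustered standard errors, but the sketch shows why double demeaning removes firm and year effects.

```python
import statistics as st

def twoway_fe_slope(data):
    """Two-way fixed effects on a balanced panel via the within
    transformation: demean x and y by firm and year means (adding back
    the grand mean), then fit pooled OLS through the origin.
    data: list of (firm, year, x, y) tuples."""
    def group_means(key_idx, val_idx):
        out = {}
        for k in {d[key_idx] for d in data}:
            out[k] = st.mean(d[val_idx] for d in data if d[key_idx] == k)
        return out

    gx = st.mean(d[2] for d in data)          # grand means
    gy = st.mean(d[3] for d in data)
    fx, fy = group_means(0, 2), group_means(0, 3)   # firm means
    tx, ty = group_means(1, 2), group_means(1, 3)   # year means

    xs, ys = [], []
    for f, t, x, y in data:                   # double-demeaned values
        xs.append(x - fx[f] - tx[t] + gx)
        ys.append(y - fy[f] - ty[t] + gy)
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

# Synthetic balanced panel: y = 2*x + firm effect + year effect
fe = {"A": 1.0, "B": 5.0, "C": -2.0}
te = {1: 0.0, 2: 3.0, 3: 1.0}
xv = {("A", 1): 1, ("A", 2): 2, ("A", 3): 4,
      ("B", 1): 2, ("B", 2): 3, ("B", 3): 3,
      ("C", 1): 5, ("C", 2): 1, ("C", 3): 2}
panel = [(f, t, x, 2.0 * x + fe[f] + te[t]) for (f, t), x in xv.items()]
beta = twoway_fe_slope(panel)   # recovers the true slope of 2.0
```

Because the additive firm and year effects vanish under the transformation, the estimator recovers the true slope exactly in this noise-free example.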

  • JIANG Enbo, QIN Yu
    Journal of library and information science in agriculture. 2025, 37(10): 4-21. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0338

    [Purpose/Significance] As artificial intelligence (AI) systems are being widely deployed across diverse domains such as education, healthcare, and public governance, the absence of standardized metadata specifications has led to fragmented descriptions, inconsistent documentation, and difficulties in model evaluation and reuse. This study aims to address the pressing issues of opacity, lack of interpretability, and poor traceability in current AI models, which have increasingly become obstacles to the development of transparent and responsible AI. To overcome these challenges, this study proposes the establishment of a unified metadata specification for AI models to enhance their discoverability, transparency, interoperability, and reusability, thereby advancing the development of trustworthy AI and facilitating effective model governance. [Method/Process] Grounded in metadata quality assessment theory and lifecycle theory, the study adopted a combination of research methods, including literature review, comparative analysis of existing specifications, and questionnaire surveys. We first conducted a systematic examination of domestic and international practices related to AI model metadata specifications to identify representative standards, frameworks, and implementation approaches. Through comparative analysis, the study investigated the structure, element organization, and semantic relationships of different specifications, highlighting their similarities, differences, and areas for improvement. Meanwhile, a targeted questionnaire survey was administered to researchers, developers, and practitioners to explore user awareness, perceptions, practical experiences, and specific needs regarding metadata specification and interoperability.
Based on these findings, the study ultimately proposed a lifecycle-oriented framework for metadata specification construction, ensuring that it aligns with the key stages of AI model development, deployment, evaluation, and governance. [Results/Conclusions] The findings reveal that, although users generally recognize the importance of metadata specifications for AI models, they are largely unaware of the existing specifications. The current AI model metadata specifications have significant shortcomings in element naming, structural organization, and descriptive granularity, which hinder the effective sharing and reuse of model information. In response, the study proposed a comprehensive metadata framework encompassing key entities such as models, datasets, algorithms, technical features, performance evaluations, risks and ethics, legal information, and related resources, as well as the semantic relationships among these entities. The research concluded that establishing a unified metadata specification for AI models not only contributes to effective information management and cross-platform interoperability, but also serves as a critical infrastructure that links technology, ethics, and governance. As the metadata specification system matures and gains wider industry adoption, AI models will become increasingly controllable and trustworthy, promoting a more regulated, collaborative, sustainable, and integrated AI ecosystem.
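A lifecycle-oriented metadata record of the kind such a framework proposes could be sketched as follows. All field names and the sample model are hypothetical illustrations, not the paper's actual element set; the point is that the entities (model, dataset, evaluation, risk, legal information) become typed, serializable records.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetRef:
    """Reference to a training or evaluation dataset."""
    name: str
    uri: str = ""
    license: str = ""

@dataclass
class Evaluation:
    """One performance measurement on a named benchmark."""
    metric: str
    value: float
    benchmark: str = ""

@dataclass
class ModelMetadata:
    """Minimal lifecycle-oriented record: identification, provenance,
    performance, and risk/legal statements for one AI model."""
    identifier: str
    name: str
    version: str
    algorithm: str
    lifecycle_stage: str                               # e.g. development, deployment
    training_data: list = field(default_factory=list)  # [DatasetRef]
    evaluations: list = field(default_factory=list)    # [Evaluation]
    risk_statement: str = ""
    license: str = ""

# Hypothetical record for an illustrative model
record = ModelMetadata(
    identifier="model:0001",
    name="crop-disease-classifier",
    version="1.2.0",
    algorithm="ResNet-50",
    lifecycle_stage="deployment",
    training_data=[DatasetRef("plant-village", license="CC-BY-4.0")],
    evaluations=[Evaluation("accuracy", 0.94)],
    risk_statement="Not validated outside greenhouse imagery.",
)
doc = asdict(record)   # plain dict, ready for JSON exchange across platforms
```

Serializing the record to a plain dictionary is what enables the cross-platform interoperability the study argues for.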

  • LI Chunqiu, GUO Jie, TAN Xu, CHEN Chen, SONG Jia
    Journal of library and information science in agriculture. 2025, 37(11): 62-76. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0557

    [Purpose/Significance] Rural cultural memory is an important component of social memory. It represents a collection of cultural memories related to villages, village histories, and village customs within specific rural spatial-temporal contexts. In the context of digital-intelligence development, the digital-intelligent transmission of rural cultural memory can promote the protection, revitalization, and utilization of rural cultural resources. This study focuses on how smart data can empower the digital-intelligent inheritance of rural cultural memory. It reviews construction projects in the fields of rural memory initiatives and cultural heritage, and proposes paths for leveraging smart data to facilitate the digital-intelligent inheritance of rural cultural memory from the perspectives of resources, technology, and services. [Method/Process] The research classifies rural memory and rural digital memory, summarizes smart data studies in the field of cultural heritage, and analyzes the current status of representative rural cultural projects and cultural heritage construction projects from the perspectives of resources, technology, and services. At the resource level, multimodal and high-value rural cultural resources and their associated data are aggregated, with wide-ranging sources and diverse data formats. At the technology level, technical support is provided to achieve the integration and correlation of multimodal data. At the service level, the intelligent platform offers multi-scenario services, such as data acquisition, data correlation analysis, and data crowdsourcing. The practical experience of smart cultural heritage projects, along with the concept of smart cultural heritage data, provides methodological insights and reference paths for resource construction, technology application, and service implementation in the digital-intelligent inheritance of rural cultural memory.
[Results/Conclusions] Smart data provide new concepts of resource integration, new technology applications, and intelligent services for the inheritance of rural cultural memory. Existing smart cultural heritage projects offer approaches for the digital-intelligent inheritance of rural cultural memory. This study therefore proposes paths for smart data to empower the digital-intelligent inheritance of cultural memory from the perspectives of data resource construction, technological innovation, and service philosophy. At the resource level, multiple stakeholders should be coordinated to integrate high-quality data resources. At the technology level, efforts should focus on phased objectives and technology aggregation to unlock the value of rural cultural memory. At the service level, the construction of an intelligent service space for rural cultural memory is recommended to address diverse needs. In the future, the digital-intelligent inheritance of rural cultural memory should align with the characteristics of rural cultural resources to construct interoperable smart data models. This will enable the high-level integration and interconnection of digital rural cultural resources and foster a model in which digital-intelligent technologies and the utilization of rural cultural resources reinforce each other.

  • XIAO Yufan, CHEN Rui, HUANG Ying
    Journal of library and information science in agriculture. 2025, 37(6): 87-101. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0349

    [Purpose/Significance] As the knowledge economy grows and global technological competition intensifies, universities have become essential drivers of innovation within national innovation systems. Not only do high‑level research universities generate original scientific discoveries, they also serve as catalysts for technological innovation and drivers of industrial upgrading. Their roles span from conceiving breakthrough ideas to shepherding technologies through product development and into marketable applications. Nevertheless, the multifaceted nature of these contributions remains insufficiently characterized, making it difficult for policymakers and university leaders to fine‑tune strategies that maximize impact. A comprehensive understanding of how universities contribute at each stage of the innovation continuum is therefore vital for optimizing their functions, informing targeted policy interventions, and reinforcing the synergetic linkages between academia, industry, and government in both national and global contexts. [Method/Process] To clarify universities' distinct contributions at each stage of innovation, this study presents an innovation value chain model and corresponding analytical framework that systematically maps their core functions - serving as knowledge innovators during basic research, technology developers in applied research, transfer agents in product development, and academic entrepreneurs in commercialization. Based on this model, we constructed an analytical framework comprising qualitative and quantitative indicators tailored to capture university activities at each stage. During the basic research phase, metrics such as publication volume, citation impact, and basic science funding shed light on the roles of universities as innovators of knowledge. During applied research, patent filings, joint industry‑university project counts, and collaborative R&D expenditure serve as proxies for technology development capacity. 
The product development phase assessment centers on technology licensing volume, spin‑off formation rate, and prototype demonstration projects to gauge technology transfer effectiveness. Finally, commercialization was examined via start‑up success rates, venture funding attracted, and market penetration of university‑originated products. Empirical analysis was conducted on representative samples drawn from China's C9 League universities and the U.S. Ivy League universities, leveraging bibliometric databases, patent offices, and institutional reports to ensure data robustness. [Results/Conclusions] The findings demonstrate that universities in China and the U.S. play distinct yet complementary roles at different innovation stages. Chinese universities exhibit rapidly growing research outputs and increasing basic research capability, signaling a powerful catching‑up momentum in building technological reserves. Their strengths lie primarily in knowledge generation and early‑stage technology development, supported by substantial increases in R&D investment and talent cultivation. In contrast, U.S. universities maintain leadership in original innovation quality and commercialization efficiency, underpinned by high‑impact publications, a mature ecosystem of technology transfer offices, and established venture funding networks. They excel at translating research breakthroughs into market‑ready products and ventures, achieving higher license income per patent and greater market penetration. This comparative analysis underscores the necessity of diverse, stage‑specific university roles and highlights opportunities for cross‑border learning. In the future, Chinese higher education institutions (HEIs) can enhance their commercialization performance by adopting proven U.S. strategies, such as streamlined intellectual property policies, incentive programs for faculty entrepreneurship, and extensive industry partnerships, while adapting these practices to local contexts. 
By doing so, they can improve the quality and market depth of their knowledge and technology outputs, and optimize the university technology transfer system, thereby providing robust support for achieving sustainable, high‑quality economic development.

  • KE Ping, WU Jianhua, ZHAO Junling, YAN Beini, XIAO Peng
    Journal of library and information science in agriculture. 2025, 37(8): 4-28. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0339

    In May 2025, the General Office of the Ministry of Education and the General Office of the Central Publicity Department jointly issued the Notice on Further Implementing the National Youth Student Reading Initiative (hereinafter referred to as the "Notice"). Based on the 2023 National Youth Student Reading Initiative Implementation Plan, the Notice outlines five key projects to be implemented. These projects aim to promote nationwide reading, support the strategies for building a strong educational country and a strong cultural country, and establish a solid cultural foundation for the growth of young people and the development of the nation. This journal has invited five experts to conduct in-depth discussions on the core requirements and practical paths of the Notice from perspectives including strategic positioning, campus practice, the role of libraries, and home-school-community collaboration. The experts have thoroughly analyzed the key issues and implementation strategies of the National Youth Student Reading Initiative. 1) Strategic Positioning and Systematic Construction of the Youth Reading Initiative: Professor Ping Ke points out that this initiative is the core of nationwide reading and a pillar of the national strategy for building a strong nation. Reading should be integrated into the teaching of all disciplines, not just Chinese language courses. With "improving reading efficiency" as the core focus, efforts should be made to cultivate young people's interest in reading and their ability to think critically, in order to optimize their knowledge structure and values. He proposes a "trinity" reading system in which "schools are the core and libraries and families are the two wings". This system connects multiple parties to form five chains, including those responsible for resource production and dissemination, so as to promote nationwide collaboration. 
He also suggests ensuring the initiative's sustainable development through legal revisions and long-term planning. 2) Revitalization Path of Rural Primary and Secondary School Libraries: Professor Jianhua Wu points out that rural libraries face several problems, including poor infrastructure, a shortage of professional talent, and insufficient funding. In line with the "Rural School Library Revitalization Plan" mentioned in the Notice, he proposes that each county should build two model primary school libraries and one model junior high school library. He also proposes improving reading spaces and intelligent systems, and allocating full-time staff at a ratio of one staff member per 500-1,000 students. Drawing on Israel's experience, he suggests establishing library service centers, combining public welfare resources to address resource issues, and organizing college student volunteers to return to their hometowns to provide companionship and reading assistance, with the goal of transforming rural libraries into centers that offer high-quality services. 3) Professional Advantages and Empowering Role of Libraries: Professor Junling Zhao emphasizes that academic research on library-based reading promotion provides a theoretical foundation for the initiative. The core advantage of libraries lies in providing high-quality reading materials, organized collections, and free reading spaces. She suggests strengthening research on young people's reading behavior, reading therapy, and activity evaluation, promoting the development of practical toolkits based on the results of this research, and improving the scientific level of practice. 4) Precise Resource Supply through Home-School-Community Collaboration: Professor Beini Yan analyzes the current resource supply contradictions and clarifies the roles of families, schools, and communities.
Families should foster a love of reading and provide personalized resources; schools should implement systematic reading programs; and social institutions should offer professional services. To meet the personalized needs of young people, she proposes establishing a hierarchical resource pool, building a circulation network with "internal circulation + external circulation", and using big data to optimize resource matching. 5) Positioning Return and Development Path of School Libraries: Professor Peng Xiao points out that school libraries are one of the "three pillars" of modern library initiatives and are essential to implementing the youth reading initiative. However, they are facing problems such as the "five imbalances" in development, a lack of research discourse, and insufficient innovation vitality. He calls for school libraries to be placed back at the core of China's library initiatives and suggests that future research should focus on five key issues, including clarifying the functional value of school libraries. This will help compensate for the deficiencies in the nationwide reading infrastructure and contribute to building a strong educational country and a strong cultural country.

  • WU Yuhao, ZHOU Zhihong, LIU Wei, XU Bangdong
    Journal of library and information science in agriculture. 2025, 37(11): 30-46. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0602

    [Purpose/Significance] From the perspective of value chain collaboration, a trusted data space system adapted to the characteristics of smart library scenarios is constructed, aiming to solve systematic problems such as fragmented cross-domain integration, a lack of trusted guarantees, and inefficient value transformation in current library data governance. The study will contribute to improving the theoretical framework for governing library data. It also provides practical guidance on balancing the contradiction between data circulation and security, as well as on releasing the operational value of data elements. This helps smart libraries to strengthen their core functions in terms of public cultural service provision and knowledge empowerment. [Method/Process] Adopting a public value approach, we analyzed the coupling logic and value dimensions of technical collaboration, rights and responsibilities, and scenario adaptation across the value chain links, as well as the hierarchical improvement laws of the data, knowledge, service, and ecosystem layers. This was based on clarifying the four core elements of the trusted data space of smart libraries: data, subjects, technology, and systems. We also examined the characteristics of trusted collaboration and value progression. The collaborative optimization process was examined in conjunction with the links between the various stages of the data lifecycle. The path of expansion for the cross-chain ecosystem was constructed through collaboration between libraries, industry links, and social empowerment, ensuring a high degree of compatibility with the scenario requirements of smart libraries.
[Results/Conclusions] The trusted data space system of smart libraries consolidates the foundation of data trustworthiness through technological integration, activates the efficiency of the value network through the collaboration of subjects, consolidates the basis of operation guarantee through institutional norms, and extends the coverage boundary of services through value transformation, thus forming a governance pattern of four-dimensional interaction among technology, subjects, systems, and values. Based on this, four collaborative strategies, namely ecological niche reconstruction, capability leap, dynamic risk governance and value closed loop, are proposed. These strategies ultimately facilitate a systematic transition from the aggregation of data resources to the co-creation of ecological value. In the future, the element configuration and collaborative mechanism of the trusted data space can be optimized in combination with the service positioning of different libraries. The goal can be achieved through pilot construction, which will allow us to collect practical data, verify the system's feasibility and effectiveness, and explore the integrated application path of AI large models and trusted data spaces.

  • TANG Feng, FANG Xiangming, WANG Yixin
    Journal of library and information science in agriculture. 2025, 37(11): 47-61. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0462

    [Purpose/Significance] The digital characteristic collections of libraries are facing significant challenges in terms of data circulation and value, which greatly limits their potential utility. To address these issues, this study proposes to establish a trusted data space specifically designed for the digital special collections of libraries. The main objective is to reduce the costs related to trust and promote the full utilization of its multi-dimensional value in areas such as cultural heritage protection, academic research, industrial innovation, and social education. By creating a secure and interoperable environment for data sharing, the plan aims to transform the way digital special collections are managed, accessed and utilized, thereby enhancing their contribution to broader social goals. [Method/Process] This study centers on the trusted data space to explore the cross-domain circulation and value release mechanism of digital specials. It aims to build a dedicated and trusted data space for libraries, break down data barriers, and activate multi-dimensional value. The investigation follows a structured approach centered on requirements analysis, framework construction and strategy formulation. This research is based on the concept and technical foundation of the trusted data space, taking into account the unique attributes and sharing requirements of digital special collections. A comprehensive theoretical framework has been developed and centers around three core capability streams: resource interaction, trusted governance, and value co-creation. These flows are supported by a five-layer architecture model: infrastructure, data interaction, data elementization, intelligent services, and value realization. 
To illustrate the practical application of this framework, typical usage scenarios were analyzed to demonstrate how special collections data can be transformed from raw resources into valuable assets, and the characteristics and key tasks of specific stages were examined in detail. In addition, a multi-faceted implementation strategy is proposed to address real-world challenges, including stakeholder reluctance, technological heterogeneity, and conflicts in rights management. The strategy emphasizes the development of intelligent resources, the integration of multi-modal and heterogeneous technologies, policy incentive mechanisms, and the establishment of a sound data element market. [Results/Conclusions] The trusted data space proposed in this paper provides a systematic and effective solution for the trusted circulation and efficient utilization of cultural data. It transforms digital special collections into open, reusable assets, thereby significantly enhancing the quality and scope of public cultural services. This development aligns with and supports the national strategic goals of building a "cultural power" and a "Digital China". Looking ahead, future research should prioritize the shift from theoretical conceptualization to practical implementation, including integrating technical solutions with actual service workflows and clarifying the unique role of libraries in the broader data ecosystem. To ensure long-term success and influence, key challenges such as sustainable business models and scientifically sound evaluation mechanisms must be addressed.

  • GENG Ruili, WANG Yifan, LI Sentao, WEI Qi
    Journal of library and information science in agriculture. 2025, 37(6): 20-36. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0322

    [Purpose/Significance] Open government data (OGD) has increasingly adopted storytelling elements to improve public engagement and enhance user comprehension. Although this narrative approach enhances data accessibility and cognitive resonance, it raises significant privacy concerns. Specifically, storytelling may activate users' cognitive schemas, enabling them to infer sensitive personal information even from anonymized datasets. This tension between data utility and privacy risk poses a growing challenge for data providers and policymakers. In this study, we aim to explore how storytelling in OGD affects users' cognitive reasoning processes and leads to privacy risks. Our work innovatively combines cognitive psychology, information science, and privacy risk assessment. This interdisciplinary approach offers a new perspective on how data narratives shape inference behavior. Distinct from existing research, this paper focuses on how cognitive mechanisms driven by storytelling influence users' perception and extraction of private information. This research holds practical significance for designing privacy-aware data disclosure strategies that strike a balance between openness and protection. [Method/Process] To analyze the cognitive mechanisms underlying privacy risk, we adopted a mixed-methods research design grounded in relevance theory, schema theory, and the S-O-R model. We first constructed a user cognitive connection model that conceptualized how narrative stimuli activated cognitive processing and led to privacy-related inferences. Based on this model, we developed a privacy risk assessment index comprising three primary dimensions: data association and reasoning, data processing and decoding, and implicit suggestion and implication. We then conducted a controlled experiment involving 236 participants, who were randomly divided into a storytelling group and a non-storytelling group. 
To analyze the collected data, we used the CRITIC method to assign objective weights to evaluation indicators and applied a fuzzy comprehensive evaluation method to quantify and compare privacy risks across groups. [Results/Conclusions] Our results demonstrated that storytelling significantly heightened users' ability to infer sensitive personal information. The average inference score in the storytelling group was significantly higher than that in the non-storytelling group (p<0.05), and the comprehensive privacy risk level was rated as "medium risk" compared to the non-storytelling group's "low risk." Across all three risk dimensions, the storytelling group consistently exhibited greater cognitive engagement and higher potential for privacy exposure. These findings suggested that while storytelling enhanced user understanding, it also increased the risk of privacy violations. As such, we recommended that government data platforms adopt non-storytelling or partially abstracted data presentation strategies to reduce risk while preserving clarity. From a policy perspective, we advocated for the integration of intelligent narrative-generation algorithms and privacy-by-design principles to protect users' information. Although limited by sample size and data diversity, this study offered a foundation for future research into the cognitive underpinnings of privacy risk. Further work may explore other forms of storytelling and demographic influences on inference behavior.
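The CRITIC-weighting-plus-fuzzy-evaluation pipeline named above can be sketched as follows. All rating values and the membership matrix are illustrative placeholders, not the study's data; only the general shape of the two methods (CRITIC weights from contrast and conflict, then a composite membership vector B = w · R) follows the abstract.

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weighting: each criterion's weight combines its
    contrast intensity (standard deviation after min-max normalization)
    with its conflict with the other criteria (sum of 1 - Pearson r)."""
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    sigma = Xn.std(axis=0, ddof=1)            # contrast intensity
    r = np.corrcoef(Xn, rowvar=False)         # criterion correlations
    info = sigma * (1 - r).sum(axis=0)        # information content
    return info / info.sum()                  # normalized weights

# Hypothetical ratings: 5 respondents scoring 3 risk indicators
ratings = np.array([
    [0.6, 0.4, 0.8],
    [0.7, 0.5, 0.3],
    [0.2, 0.9, 0.6],
    [0.9, 0.3, 0.5],
    [0.4, 0.7, 0.7],
])
w = critic_weights(ratings)

# Fuzzy comprehensive evaluation: membership of each indicator in the
# risk grades (low, medium, high); each row sums to 1.
R = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.5, 0.4],
])
B = w @ R                                     # composite membership vector
grades = ["low", "medium", "high"]
risk_level = grades[int(B.argmax())]          # maximum-membership grade
```

The final grade is read off by the maximum-membership principle; with real experimental data, the weights and membership matrix would be estimated per group before comparing risk levels.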

  • LI Shuqi, LI Jian
    Journal of library and information science in agriculture. 2026, 38(1): 79-94. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0459

    [Purpose/Significance] Digital hoarding has emerged as a significant behavioral phenomenon in the digital age, particularly prevalent among social media users who engage in the excessive acquisition and retention of digital content. This behavior is further amplified by algorithmic recommendation systems that continuously personalize content delivery. Although existing research has examined individual psychological factors or platform characteristics using static approaches, it lacks a dynamic perspective to understand the co-evolutionary relationship between platform strategies and user behaviors. This study addresses this research gap by introducing evolutionary game theory as an innovative analytical framework. Theoretically, the significance lies in modeling the dynamic interactions between platforms' algorithmic adjustments and users' hoarding behaviors. This provides new insights into the adaptive mechanisms within socio-technical systems. From a practical standpoint, this research offers valuable implications for promoting healthier digital environments and developing sustainable governance models for platforms that balance commercial objectives with user well-being. [Method/Process] This study employs evolutionary game theory to model the dynamic interactions between social media platforms and boundedly rational users. This method is well-suited for analyzing how strategies co-evolve over time towards stable states. Based on literature from user behavior and platform economics, a game-theoretic model was developed. Numerical simulations in MATLAB analyzed evolutionary paths across four platform types (Instant Messaging, Public, Short Video, and Vertical Community), with the model calibrated against empirical typologies to investigate how key factors influence long-term outcomes. 
[Results/Conclusions] The simulation results reveal that the evolutionary path of the platform-user interaction system is highly sensitive to key parameters, ultimately converging to different evolutionarily stable strategies (ESS) under varying conditions. A principal finding is that a unilateral increase in algorithmic recommendation intensity by platforms, while potentially boosting short-term engagement, does not guarantee long-term benefits and may instead drive users towards non-hoarding strategies due to increased cognitive burden. Crucially, the reasonable regulation of recommendation intensity is identified as the key to achieving sustainable, positive interactions. Moderate algorithmic recommendations can effectively alleviate information overload, reduce the negative impacts of hoarding, enhance user experience and satisfaction, and ultimately increase long-term platform benefits, creating a win-win scenario. The study provides significant managerial implications, suggesting that platform operators should incorporate user well-being metrics into algorithm evaluation frameworks, moving beyond purely engagement-driven models. Differentiated governance strategies are recommended for various platform types, such as implementing intelligent filtering on instant messaging apps and content quality incentives on vertical communities. However, this study has limitations, primarily its assumption of user homogeneity, which overlooks the impact of individual differences in preferences and digital literacy. Future research should introduce user heterogeneity, explore multi-platform competition scenarios, and validate the model with empirical data to enhance its practical predictive power and application value.
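The platform-user co-evolution described above can be illustrated with a toy two-population replicator model. The payoff matrices below are hypothetical placeholders, not the paper's calibrated parameters, and the study's MATLAB simulations are sketched here in Python with a plain forward-Euler iteration.

```python
import numpy as np

# Illustrative payoff matrices (NOT the paper's calibrated values).
# Row i = platform strategy (0 = high recommendation intensity,
# 1 = moderate); column j = user strategy (0 = hoard, 1 = not hoard).
A = np.array([[3.0, 1.0],    # platform payoffs
              [2.0, 2.5]])
B = np.array([[1.0, 2.0],    # user payoffs
              [1.5, 2.0]])

def simulate(x0, y0, steps=20_000, dt=0.01):
    """Two-population replicator dynamics via forward Euler.
    x = share of platforms playing 'high intensity',
    y = share of users playing 'hoard'."""
    x, y = x0, y0
    for _ in range(steps):
        # Expected payoff of each pure strategy against the other side's mix
        fp_high = A[0, 0] * y + A[0, 1] * (1 - y)
        fp_mod = A[1, 0] * y + A[1, 1] * (1 - y)
        fu_hoard = B[0, 0] * x + B[1, 0] * (1 - x)
        fu_not = B[0, 1] * x + B[1, 1] * (1 - x)
        # Replicator update: a strategy grows when it beats the alternative
        x += dt * x * (1 - x) * (fp_high - fp_mod)
        y += dt * y * (1 - y) * (fu_hoard - fu_not)
    return x, y

# Under these payoffs, high intensity only pays when most users hoard,
# so from an evenly mixed start the system converges to the ESS
# (moderate intensity, non-hoarding).
x_star, y_star = simulate(0.5, 0.5)
```

Varying the payoff entries or initial shares reproduces the qualitative sensitivity the abstract reports: different parameter regimes drive the iteration toward different evolutionarily stable strategies.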

  • GUO Limin, LIU Yueru, FU Yaming
    Journal of library and information science in agriculture. 2025, 37(12): 36-47. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0562

    [Purpose/Significance] This paper examines the ongoing transformation of library information systems, shifting from platform-oriented architectures to agent-based ones, in the context of generative artificial intelligence. It argues that, although Integrated Library Systems (ILS) and Library Services Platforms (LSP) have improved workflow automation and resource management, they remain constrained by poor semantic understanding, restricted cross-system orchestration, and insufficient support for proactive, personalized services. Building on these observations, the paper proposes a transformation path in which existing ILS/LSP infrastructures are not discarded, but rather re-positioned as providers of capabilities within a broader ecosystem of generative intelligent agents. This provides libraries facing both legacy constraints and pressures for service innovation with a feasible evolution strategy. [Method/Process] The study first reviews service-level limitations of ILS and LSP through the lenses of interaction patterns, data openness, and intelligent service support, and distills typical pain points encountered in cataloging, circulation, reference services, and subject liaison work. On this basis, it constructs a graded capability model for generative intelligent agents that encompasses semantic perception, context modeling, goal-driven behavior, preference adaptation, and reflective evolution. It also discusses how different types of agents can be aligned with specific library roles and task granularities. The study then proposes a three-layer architecture consisting of a basic service layer, an agent coordination layer, and a semantic interaction layer. 
The bottom layer exposes atomic capabilities such as search, metadata editing, authentication, and logging; the middle layer orchestrates multiple agents via lightweight protocols and shared task states; and the top layer supports natural-language-driven interaction while maintaining semantic consistency and traceable reasoning paths. Finally, leveraging a "Library Assistant" prototype that integrates these components, the study designs and conducts experimental evaluations in bibliographic follow-up and recommendation scenarios, combining task-based user tests with qualitative feedback from librarians and domain experts. [Results/Conclusions] Experimental results indicate that the proposed architecture outperforms traditional models in terms of answer relevance, interaction fluency, and perceived service intelligence, particularly in multi-step information-seeking and follow-up recommendation tasks. At the same time, the study found that the mechanisms for long-term memory, cross-session user modeling, and explicit feedback loops were underdeveloped, which can lead to inconsistencies in sustained interactions and complex task chains. The paper concludes with a discussion of the design implications for the evolution of library systems, suggesting that future work should focus on trustworthy memory management, transparent agent coordination, and robust evaluation metrics. It also recommends the development of governance frameworks that jointly consider system performance, user experience, professional ethics, and institutional policy requirements. In this way, the study provides both a conceptual blueprint and empirical evidence to guide the transition from platform-oriented systems to agent-based, generative AI-enabled library architectures.
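The three-layer split described above can be sketched minimally in code. All class, function, and capability names here are illustrative assumptions, not the paper's "Library Assistant" API; the semantic interaction layer is omitted, leaving only the basic service layer (atomic capabilities) and the coordination layer (agents passing a shared task state).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

class BasicServiceLayer:
    """Bottom layer: exposes ILS/LSP functions as named atomic capabilities."""
    def __init__(self):
        self._capabilities: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        self._capabilities[name] = fn

    def invoke(self, name: str, **kwargs):
        return self._capabilities[name](**kwargs)

@dataclass
class TaskState:
    """Shared task state the coordination layer passes between agents."""
    goal: str
    steps: list = field(default_factory=list)

class AgentCoordinationLayer:
    """Middle layer: routes a task through a sequence of agents, each of
    which calls atomic capabilities and appends its result to the state."""
    def __init__(self, services: BasicServiceLayer):
        self.services = services
        self.agents: list = []

    def run(self, state: TaskState) -> TaskState:
        for agent in self.agents:
            state = agent(self.services, state)
        return state

# Example agent built on a hypothetical 'search' capability
def search_agent(services, state):
    hits = services.invoke("search", query=state.goal)
    state.steps.append(("search", hits))
    return state

services = BasicServiceLayer()
services.register("search", lambda query: [f"record matching '{query}'"])
coordinator = AgentCoordinationLayer(services)
coordinator.agents.append(search_agent)
result = coordinator.run(TaskState(goal="soil microbiome"))
```

In a fuller sketch, the top layer would translate a natural-language request into a `TaskState` goal and render the accumulated `steps` back to the user, keeping the reasoning path traceable.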