Most Read
  • DAI Xinwei, LI Feng
    Journal of library and information science in agriculture. 2025, 37(5): 86-101. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0148

    [Purpose/Significance] Amid the global wave of digital transformation in education, artificial intelligence (AI) has emerged as a driving force behind Japanese educational reform, propelling the country's education system toward an "AI+" model. The "Approved Program for Mathematics, Data Science and AI Smart Higher Education" (MDASH), led by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), outlines a comprehensive framework for designing and implementing AI literacy (AIL) education in Japanese universities. MDASH not only reflects Japan's strategic response to an AI-driven future, but also provides valuable theoretical references and practical guidance for enhancing AIL education in China. This study provides a detailed analysis of the "MDASH literacy-level" (MDASHL) curriculum model, paying particular attention to its modules and the mechanisms of interaction between them. It also examines the theoretical references that the MDASHL review system offers to studies of AIL frameworks. The study proposes innovative implementation strategies for AIL education from unique perspectives, especially that of "industry-academia integration". [Method/Process] Using internet research and literature analysis, and starting with an exploration of the Japanese national AI policy landscape, the study traces the evolution of Japanese AI policies and the contextual origins of MDASH. It describes the objectives and philosophy of Japanese AIL education and delves into the theoretical underpinnings of the MDASHL curriculum model, based on the mapping between the indicators of AIL frameworks and the components of the MDASHL review system. We selected Hokuriku University, Wakayama University, Chiba University, and Kansai University as samples because they were approved under MDASHL and demonstrated exemplary results.
We introduce their subject curriculum design and specific teaching initiatives, identify the commonalities and unique characteristics of their AIL education, and further elaborate on their specific implementation pathways. [Results/Conclusions] The findings indicate that the Japanese MDASHL curriculum model is deeply rooted in AIL frameworks. The study summarizes five directions for Japanese AI literacy education: recognition, realization, comprehension, ethics, and practical operation. Comparing the current status of AIL education in China and Japan, it finds that Japanese AIL education has achieved rapid responsiveness and systematic development under the unified coordination of MEXT. It suggests that Japanese AI literacy education strategies offer value that can be localized, with beneficial insights in three areas: strategic planning, curriculum design, and industry-academia integration. These strategies provide innovative solutions for developing AIL education systems in Chinese universities. However, this study acknowledges limitations in sample size. To comprehensively capture the full landscape of Japanese AIL education, future research should expand the sample, summarize its patterns and characteristics more thoroughly, and enhance the persuasiveness and generalizability of the findings.

  • SHEN Hongjie, SHEN Hongwei, WANG Junli
    Journal of library and information science in agriculture. 2025, 37(7): 50-60. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0231

    [Purpose/Significance] In the digital era, information literacy has evolved from an academic skill into a fundamental competency essential for civic participation and lifelong learning. Traditional information literacy education in digital libraries faces significant challenges, including the need for standardized content delivery, limited interactivity, high development costs, and insufficient user engagement. The rapid advancement of generative artificial intelligence (GenAI) technologies presents an unprecedented opportunity to transform information literacy education by leveraging powerful capabilities in natural language processing, personalized interaction, and content generation. This study represents a pioneering systematic exploration of how GenAI can be strategically integrated into digital library information literacy education. It addresses a critical gap in existing research, which primarily focuses on general educational applications rather than library-specific contexts. The research strengthens the theoretical basis of AI-enhanced library education and offers practical advice to institutions adopting innovative educational technologies while upholding quality and ethical standards. [Method/Process] This study employs a comprehensive mixed-method approach combining systematic literature review, theoretical analysis, and conceptual framework development. The methodology is grounded in well-established information literacy frameworks, particularly the ACRL Framework, which provides a foundation for breaking information literacy education down into five key components: information need identification, retrieval strategy development, resource evaluation, information management, and ethics education. A four-dimensional challenge analysis framework was constructed, encompassing content quality and credibility, pedagogical methods and learning outcomes, ethics and social equity, and operational considerations.
The research synthesizes evidence from emerging AI-enhanced education practices, preliminary library applications, and educational technology literature to develop comprehensive application pathways and strategic responses. [Results/Conclusions] The research identifies specific GenAI integration pathways across the complete information literacy process. Applications include intelligent dialogue guidance for need identification, simulated training environments for retrieval skills, controlled assessment materials for evaluation practice, and interactive ethical scenario simulations. Four primary challenge categories are revealed: content quality issues, including AI hallucination and embedded biases; pedagogical challenges, such as over-dependence risks and assessment complexity; ethical concerns encompassing data privacy and algorithmic discrimination; and operational challenges, including implementation costs and staff capability requirements. Strategic responses include human-AI collaborative review mechanisms, process-oriented task design emphasizing critical thinking, transparent ethical governance frameworks, and comprehensive staff development initiatives. The study emphasizes librarian role transformation toward learning facilitators, AI literacy educators, and ethics advocates. Despite these contributions, limitations include reliance on theoretical analysis rather than empirical validation and insufficient attention to user group heterogeneity. To ensure equitable and effective AI-enhanced information literacy education, future research should prioritize empirical outcome studies, case studies of pioneering implementations, and the development of library-specific AI tools.

  • YANG Xuejie, LIU Jia, WU Qingxiao, WANG Yufei, GU Dongxiao
    Journal of library and information science in agriculture. 2025, 37(4): 24-38. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0079

    [Purpose/Significance] Against the backdrop of an accelerating population aging trend, the integration of big data and intelligent services in multimodal healthcare and eldercare has become pivotal for enhancing the quality of medical and eldercare services. However, existing knowledge service systems for big data in healthcare and eldercare face challenges such as the difficulty of integrating multi-source heterogeneous data, the absence of cross-organizational sharing mechanisms, and passive service models. [Method/Process] First, a cross-domain aggregation method is proposed for multi-source heterogeneous medicare big data, including: 1) A method for constructing a clinical, key-feature-based medical case knowledge database. It extracts and categorizes critical features from electronic medical records using natural language processing (NLP). 2) An NLP-based cross-domain disease risk factor mining framework. It identifies risk factors from social media via topic-enhanced word embeddings and clustering techniques. 3) An adaptive pointer-constrained generation method for medical text-to-table tasks. It leverages the BART architecture to transform unstructured medical text into structured tables. Next, a knowledge discovery method based on multimodal medicare big data is developed, including: 1) A medical decision support approach integrating case-based reasoning (CBR) and explainable machine learning. It aims to enhance diagnostic interpretability through ensemble learning and case similarity analysis. 2) A large-scale medical model-driven knowledge system. It utilizes multimodal data pretraining and domain adaptation to support the entire diagnosis-treatment process. 3) A personalized recommendation method based on temporal warning signals, generating precise intervention plans via collaborative filtering and dynamic updates.
Finally, a smart service model for full-cycle evolving needs is constructed, including: 1) A health information supply-demand consistency matching framework combining deep learning and clustering techniques; 2) A multi-level, cross-scenario approach to dynamically modeling health demand and behavior. [Results/Conclusions] The proposed methodological framework significantly improves the efficiency with which medicare big data is integrated and the capabilities of its knowledge services. Key outcomes include: 1) Enabling disease risk prediction and personalized interventions through deep integration of cross-organizational, cross-scenario medicare data via multimodal aggregation and semantic alignment; 2) The CBR-ECC model and WiNGPT large medical models enhancing the interpretability and full-process coverage of medical decision-making, improving the accuracy of diagnoses made by primary care physicians by over 30%; 3) The temporal warning-based recommendation method increasing the dynamic update efficiency of health interventions by 40% and user satisfaction by 25%; 4) Dynamic health demand modeling revealing core pain points for chronic disease patients, providing a basis for precision service strategies. This research provides theoretical and technical support for developing a proactive health service system that is both data-driven and human-machine collaborative. The system will advance the implementation of the Healthy China strategy and innovation in aging population governance.
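The case-similarity retrieval at the heart of the CBR approach described above can be sketched in a few lines. This is a minimal illustration only: the feature names, vectors, and case IDs are invented, and a real system would use richer features extracted by NLP from electronic records.

```python
# Minimal sketch of case-based reasoning (CBR) retrieval: rank stored
# medical cases by cosine similarity to a new patient's feature vector.
# Feature names and values are illustrative, not from the study.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy case base: (case_id, feature vector) pairs, e.g. normalized
# [age, systolic_bp, glucose] extracted from electronic records.
case_base = [
    ("case_a", [0.7, 0.9, 0.8]),
    ("case_b", [0.2, 0.3, 0.1]),
    ("case_c", [0.6, 0.8, 0.9]),
]

def retrieve(query, cases, k=2):
    """Return the k case IDs most similar to the query vector."""
    ranked = sorted(cases, key=lambda c: cosine(query, c[1]), reverse=True)
    return [cid for cid, _ in ranked[:k]]

print(retrieve([0.65, 0.85, 0.85], case_base))  # most similar cases first
```

A production variant would weight features by clinical importance and return the retrieved cases' outcomes for reuse and revision, following the standard CBR cycle.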

  • SHEN Mengcheng, CHEN Xiuping
    Journal of library and information science in agriculture. 2025, 37(4): 66-82. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0190

    [Purpose/Significance] Integrating culture and tourism is key to promoting rural revitalization. Constructing a scientific evaluation system and exploring differentiated development paths are core issues for achieving rural cultural and tourism development. [Method/Process] Taking 26 mountainous counties in Zhejiang Province as the research object, a research framework incorporating multi-dimensional analysis methods was constructed based on user-generated content data from tourism platforms. First, travel journal texts were collected to build a cultural and tourism integration database. Second, the BERTopic model was used to identify the potential thematic elements in tourists' narratives. Third, sentiment analysis was applied to quantify the emotional value of different themes in each county. Finally, fuzzy-set qualitative comparative analysis (fsQCA) was employed to reveal the complex development paths of cultural and tourism integration. [Results/Conclusions] 1) Through global topic modeling, eight themes of cultural and tourism integration were identified in 26 mountainous counties in Zhejiang Province, including ecological landscapes, traditional settlements, food culture, transportation, cultural activities, cultural heritage and arts, accommodation facilities, and leisure industries. These eight themes are summarized into four conceptual categories of cultural and tourism integration: natural experience dimension, cultural experience dimension, service support dimension, and leisure consumption dimension. 2) Tourists generally hold a positive attitude towards cultural and tourism experiences in different counties. The natural and cultural experience dimensions are highly regarded, but the service support dimension shows uneven levels, and the leisure consumption dimension displays significant differences, hindering the improvement of the quality of cultural and tourism integration in mountainous counties. 
Ecological landscapes and traditional settlements constitute the core layer of tourism experiences, while accommodation and leisure business formats, as potential directions, still require further development and activation. The comprehensive performance of cultural and tourism integration in various counties presents a notable spatial pattern of "two poles in the north and south, a developing middle region, and relatively weak coastal areas". 3) Through configurational path analysis, the development paths of cultural and tourism integration in the 26 mountainous counties can be summarized into six configuration paths, including the "ecological landscape + traditional settlements + food culture + transportation" model, the "cultural heritage and arts + leisure industries + food culture" model, the "traditional settlements + food culture" model, the "ecological landscape + cultural activities + transportation" model, the "ecological landscape + cultural heritage and arts + leisure industries + food culture" model, and the "cultural heritage and arts + leisure industries + food culture" model. These paths differ significantly in their combinations of core elements, reflecting the differentiated development strategies of different regions in terms of resource endowment, cultural characteristics, and market positioning. Five of the paths highlight food culture as a core condition, reflecting its foundational role in cultural and tourism integration. Cultural heritage and arts and leisure industries jointly form the core conditions in three successful paths, highlighting the importance of combining in-depth cultural experiences with leisure activities. Accommodation facilities often appear as missing or marginal conditions, indicating that short-term stays are the main form of rural cultural tourism.
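The fsQCA step above rests on two standard set-theoretic scores: a configuration's consistency as a sufficient condition for the outcome, and its coverage of that outcome. A minimal sketch, with invented county membership scores:

```python
# Minimal sketch of the fuzzy-set QCA consistency and coverage scores
# used to judge whether a configuration is sufficient for an outcome.
# All membership scores below are invented for illustration.

def consistency(x, y):
    """Consistency of X as a sufficient condition for Y:
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of the outcome Y accounted for by X:
    sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Fuzzy membership of five hypothetical counties in the configuration
# "ecological landscape AND food culture" (fuzzy AND = elementwise min)...
eco = [0.9, 0.7, 0.4, 0.8, 0.2]
food = [0.8, 0.9, 0.3, 0.6, 0.5]
config = [min(a, b) for a, b in zip(eco, food)]
# ...and in the outcome "high cultural-tourism integration".
outcome = [0.85, 0.8, 0.4, 0.5, 0.3]

print(round(consistency(config, outcome), 3))  # 0.962
print(round(coverage(config, outcome), 3))     # 0.877
```

By convention, configurations with consistency above roughly 0.8 are treated as candidate sufficient paths; coverage then indicates how much of the outcome each path explains.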

  • ZHANG Weichong, XU Chen, ZHU Yiran
    Journal of library and information science in agriculture. 2025, 37(5): 72-85. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0325

    [Purpose/Significance] As digital government accelerates, the artificial intelligence (AI) literacy of grassroots civil servants has become critical to promoting smart government management. Grassroots-level civil servants who possess high levels of digital and AI literacy are indispensable drivers in establishing a digital and smart government. However, significant differences among grassroots civil servants in AI literacy and digital skills adaptation make it difficult for them to fully adapt to the requirements of smart government management. To effectively apply AI technologies in grassroots governance, it is essential to systematically identify the factors influencing AI literacy and propose targeted cultivation paths, thereby improving public service quality and governance efficiency. [Method/Process] This study integrates the Technology Acceptance Model (TAM) and Innovation Diffusion Theory (IDT) to construct a TAM-IDT analytical framework. Based on empirical research identifying the AI literacy deficiencies of current grassroots civil servants, the framework systematically examines the impact mechanisms of the key variables (perceived usefulness, perceived ease of use, and behavioral attitude) on AI literacy, and proposes stage-based and group-specific cultivation strategies. The study uses local government civil servants as its research sample. It collects data through questionnaires and interviews, and employs structural equation modeling and mediation effect analysis for empirical validation. [Results/Conclusions] The findings reveal that behavioral attitude has a significant positive impact on AI literacy. Perceived usefulness notably enhances behavioral intention, while perceived ease of use has a negative effect on behavioral attitude, suggesting that individuals who perceive greater difficulty may be more motivated to learn.
Notably, civil servants who are proficient in AI technology or have already used it in their work show a weaker desire to learn more about it. Further analysis shows that perceived ease of use positively influences behavioral attitude indirectly through perceived usefulness. Additionally, both cognitive variables indirectly affect AI literacy via behavioral attitude, forming a "cognition-intention-behavior" influence chain. Based on these results and on IDT's classification of adoption stages and adopter types, a three-dimensional, differentiated AI literacy cultivation strategy, "perception-diffusion-collaboration", is proposed. The strategy is organized around the five elements of innovation diffusion, its stages, and the groups involved. It offers a theoretical foundation and a practical path for improving AI literacy among grassroots civil servants and advancing the modernization of grassroots governance.
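The mediation logic of the "cognition-intention-behavior" chain can be illustrated with the product-of-coefficients approach: the indirect effect is the path from X to the mediator (a) times the mediator's effect on Y controlling for X (b). Below is a minimal sketch with synthetic, noiseless data; real analyses would add significance tests such as bootstrapped confidence intervals.

```python
# Sketch of a product-of-coefficients mediation test, as in the chain
# X (perceived ease of use) -> M (behavioral attitude) -> Y (AI literacy).
# Data are synthetic: M = 0.8*X + noise, Y = 0.3*X + 0.5*M exactly.

def simple_slope(x, y):
    """OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / sxx

def two_predictor_slopes(x1, x2, y):
    """OLS slopes of y on x1 and x2 (closed form for two predictors)."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

X = [1, 2, 3, 4]
M = [0.8 * x + e for x, e in zip(X, [1, -1, -1, 1])]  # noise orthogonal to X
Y = [0.3 * x + 0.5 * m for x, m in zip(X, M)]

a = simple_slope(X, M)                      # path X -> M
direct, b = two_predictor_slopes(X, M, Y)   # paths X -> Y and M -> Y
print(round(a * b, 6))    # indirect effect a*b: 0.4
print(round(direct, 6))   # direct effect: 0.3
```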

  • LIU Wei, ZHANG Lei, JI Ting, CHEN Xiaoyang
    Journal of library and information science in agriculture. 2025, 37(5): 15-26. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0379

    [Purpose/Significance] In the era of cloud computing, the Library Services Platform (LSP) failed to become the unified solution for libraries that it promised to be. Now it faces new development bottlenecks in the era of smart libraries. Its relatively rigid architecture, isolated data models, and limited intelligence make it difficult to meet modern users' urgent demands for access to new resource ecosystems and proactive services. This limitation stems from the fact that existing LSPs are rooted in a resource management design philosophy. They lack native support for intelligence, personalization, and ecosystem integration, which hinders their ability to serve as a core component in the construction of smart libraries. [Method/Process] The rapid development of large language model (LLM) technology is pushing libraries to transition from the digital and intelligent phases into a new era of intelligent services. As AI agents increasingly emerge as a core strategy for LLM applications, this paper proposes an agent-oriented next-generation LSP architecture called A-LSP. The core of A-LSP is a three-layer logical model. 1) Layer 1: Compatibility & Tools - MCP Marketplace. Serving as the foundation of the platform, this layer bridges the agent ecosystem with the external world. It transforms existing heterogeneous library systems (including legacy LSPs) and external tools into invocable "capability units" for agents through standardized protocols. 2) Layer 2: Orchestration & Intelligence - Agent Middleware. Functioning as the platform's "operating system" and "brain," this layer handles agent lifecycle management, task planning and decomposition, state and memory maintenance, and, most crucially, the coordination of multi-agent collaboration. 3) Layer 3: Application & Ecosystem - Agent Marketplace.
This functional layer serves users and developers; here, reusable agents encapsulating specific business logic are published, discovered, combined, and invoked, creating a rich application ecosystem. The architecture enables new platform strategies to be implemented without replacing legacy systems, establishing a modern technological platform with endogenous intelligence, inclusive compatibility, and an open ecosystem. This agent-based library service platform can be seen as a significant upgrade to existing LSPs: it drives their transformation from resource management-centric to agent service-centric, establishing itself as the library service platform for the AI era. [Results/Conclusions] Moreover, this paper puts forward a "Five Centers" framework of construction demands for future libraries, namely the Smart Resource Center, Smart Service Center, Smart Learning Center, Smart Scholarly Communication Center, and Smart Cultural Heritage Center, to build a blueprint for the integration of library technology and business. For each center, it delineates a representative complex application scenario and analyzes the underlying multi-agent collaboration processes, thereby clearly demonstrating A-LSP's deep integration with each center's operational logic and illuminating its profound impact on future library service models.
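The three-layer logical model can be caricatured in a few dozen lines: a tool registry stands in for the MCP Marketplace, a middleware object plans and chains agent calls, and published agents play the role of the Agent Marketplace. All class, tool, and agent names below are hypothetical, not part of any A-LSP specification.

```python
# Toy sketch of the three A-LSP layers described above.

class ToolRegistry:                      # Layer 1: capability units
    def __init__(self):
        self._tools = {}
    def register(self, name, fn):
        self._tools[name] = fn
    def invoke(self, name, *args):
        return self._tools[name](*args)

class Agent:                             # Layer 3: reusable agents
    def __init__(self, name, tool, registry):
        self.name, self.tool, self.registry = name, tool, registry
    def run(self, payload):
        return self.registry.invoke(self.tool, payload)

class Middleware:                        # Layer 2: orchestration
    def __init__(self):
        self.agents = {}
    def publish(self, agent):
        self.agents[agent.name] = agent
    def orchestrate(self, plan, payload):
        # Pass the payload through each agent in the planned order.
        for step in plan:
            payload = self.agents[step].run(payload)
        return payload

registry = ToolRegistry()
# Wrap "legacy systems" as invocable capability units (stubs here).
registry.register("search_catalog", lambda q: q + " -> 3 records")
registry.register("summarize", lambda r: "summary of (" + r + ")")

mw = Middleware()
mw.publish(Agent("discovery", "search_catalog", registry))
mw.publish(Agent("report", "summarize", registry))

print(mw.orchestrate(["discovery", "report"], "rice genomics"))
```

A real middleware would add the state, memory, and multi-agent coordination the abstract describes; the point here is only the layering: agents never touch backend systems directly, only registered capability units.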

  • ZENG Jianxun, LIN Xin, SHI Yu, ZHA Mengjuan, YANG Yanni
    Journal of library and information science in agriculture. 2025, 37(4): 4-11. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0291

    [Purpose/Significance] Unclassified sensitive information, a special type of information that falls between state secrets and public information, may threaten national scientific and technological security if it is not managed properly. This study aims to clarify the conceptual connotation and management value of unclassified sensitive information. Additionally, it seeks to improve China's information security protection system by promoting the country's "national unclassified sensitive information management system". [Method/Process] First, based on literature research and content analysis, the characteristics and attributes of unclassified sensitive information and the necessity of its management are systematically explained. Second, the management experience of controlled unclassified information (CUI) in the United States is analyzed through case studies. These cover the creation of special laws and regulations to unify CUI management, the formation of specialized management institutions with a complete organizational structure, the development of a CUI management system, and the establishment of CUI classification and labeling. Finally, strategies are proposed to promote the development of China's sensitive information management system, given the current state of management in China, with the aim of promoting the safe and efficient management of public information resources. [Results/Conclusions] China needs to strengthen the top-level design of its management system for unclassified sensitive information. First, China should establish a policy and standard system for managing unclassified sensitive information and clarify the rules for defining and controlling it. Second, the organizational and institutional structure for managing unclassified sensitive information should be improved, and the review of information disclosure strengthened.
Third, efforts should be made to develop a management system for registering directories and labels of sensitive information, so as to achieve standardized and dynamic management of the sensitive information directory. Fourth, the dissemination, use, training, and management inspections of sensitive information should be strengthened, so as to promote the long-term, healthy management of unclassified sensitive information.

  • CUI Shaojie, LIU Yanping
    Journal of library and information science in agriculture. 2025, 37(4): 39-50. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0201

    [Purpose/Significance] The advent of the digital era has presented China with significant opportunities to modernize rural governance. Digital literacy is crucial for enabling farmers to participate in rural governance and promote the development of digital villages. Rural residents are direct participants in digital village development, and their digital competence fundamentally determines the modernization level of rural governance. Their proficiency in digital skills not only affects the effectiveness of intelligent rural management, but also serves as a key indicator for measuring the progress of digital rural development. [Method/Process] Based on a literature review and sociopolitical reports, this study combines its research objectives and thematic focus to create a questionnaire investigating farmers' digital competence and rural governance performance. A stratified sampling strategy was implemented to select 306 rural households from diversified villages across township and sub-district jurisdictions in Xia County, Shanxi Province, for the case study. The dataset encompasses respondents' demographic attributes, familial characteristics, digital proficiency metrics, and multidimensional indicators of rural governance efficiency and capacity building. Through the integrated application of exploratory factor analysis (EFA) and multivariate linear regression modeling, this investigation systematically examines the mechanisms through which digital literacy influences governance outcomes. Together, these approaches establish a theoretical framework and evidence-based pathways for enhancing rural digital transformation initiatives. [Results/Conclusions] The empirical analysis indicates that farmers' digital literacy significantly impacts rural governance efficacy.
Specifically, improvements in the four dimensions of digital literacy - digital awareness, digital skills, digital application, and digital security - positively influence the efficacy of rural governance. In other words, the higher the digital literacy level of farmers, the greater the enhancement in rural governance efficacy. Among these dimensions, digital security literacy has the most significant effect on improving governance efficacy, followed by digital application literacy and then digital awareness literacy; digital skill literacy exhibits a relatively weaker impact. Given the significant positive influence of digital literacy on rural governance efficacy, this paper proposes recommendations from three perspectives: strengthening farmers' proactive awareness of digital literacy, enhancing their digital literacy knowledge, and improving digital infrastructure construction in rural areas. These suggestions provide practical references for the digital development of rural governance in Xia County, Shanxi Province. Due to various constraints, however, the study examined only Xia County in Shanxi Province, resulting in notable geographical limitations in the sample. Rural regions in different areas exhibit significant disparities in economic development levels, cultural traditions, and policy support, all of which may affect farmers' digital literacy and the efficacy of rural governance. Consequently, the conclusions of this study may not accurately reflect actual conditions in rural areas across diverse regions. To improve the generalizability of the findings, future research should expand the sample selection to include more representative areas.

  • ZHAO Yajing
    Journal of library and information science in agriculture. 2025, 37(6): 70-86. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0290

    [Purpose/Significance] This study focuses on the participation behavior of depression-prone users on user-generated content (UGC) platforms, aiming to explore their behavioral heterogeneity and the underlying influencing mechanisms. The research seeks to expand the theoretical scope of studies on user behavior while providing UGC platforms with practical guidance on building differentiated user care models and refining operational strategies. By utilizing authentic user-generated content as the data foundation, this study addresses the representational limitations commonly associated with traditional small-sample approaches, such as surveys and interviews. It introduces a data-driven perspective and methodological innovation to the field of information behavior research. Furthermore, this study enhances the understanding of the varying psychological and behavioral needs of different types of depression-prone users. The findings can assist platforms in optimizing user experience, improving emotional support systems within online communities, and informing the development of more targeted and responsive intervention strategies. [Method/Process] First, web scraping techniques were used to collect a large volume of depression-related posts from the Xiaohongshu platform as the primary data source. Second, representative keywords were extracted through Word2Vec and K-means clustering algorithms. A keyword co-occurrence network was then constructed using the Leiden clustering algorithm to identify semantic relationships. By integrating user attribute information, the study achieved a fine-grained classification of heterogeneous depression-prone user groups. Third, drawing on self-determination theory (SDT) and the technology acceptance model (TAM), and leveraging BERTopic for advanced topic modeling, the study constructed a comprehensive factor model to examine in depth the mechanisms influencing user participation behavior.
[Results/Conclusions] The research identifies three distinct types of depression-prone users: adolescent depression expression, help-seeking expression, and emotional breakdown expression. Results indicate that posting and commenting behaviors across these groups are primarily driven by emotional needs and environmental factors. Emotional needs are the dominant motivator for active participation, while environmental influences significantly contribute to triggering interaction, especially within comment sections. Additionally, the adolescent depression expression and emotional breakdown expression groups show stronger tendencies toward self-related needs, reflecting deeper emotional and identity concerns. In contrast, the help-seeking expression group exhibits more evident competence-related needs, focusing on practical advice and problem-solving. Although competence and technical factors account for a smaller proportion, they still play a meaningful supporting role in shaping the structure and substance of user participation behavior on UGC platforms.
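The keyword-extraction step (embedding words, then clustering them) can be sketched with a toy k-means over hand-made 2-D vectors; a real pipeline would cluster high-dimensional Word2Vec embeddings. The points and cluster interpretations below are invented for illustration.

```python
# Toy version of the keyword-clustering step: group small "embedding"
# vectors with k-means. Requires Python 3.8+ for math.dist.
import math

def kmeans(points, centers, iters=10):
    groups = [[] for _ in centers]
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: math.dist(p, centers[j]))
            groups[i].append(p)
        # Move each center to the mean of its group (keep empty centers).
        centers = [
            [sum(c) / len(g) for c in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Pretend 2-D embeddings for six depression-related keywords.
points = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],   # a "help-seeking" cluster
          [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]]   # an "emotional" cluster
centers, groups = kmeans(points, centers=[[0.0, 0.0], [1.0, 1.0]])
print([len(g) for g in groups])  # -> [3, 3]
```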

  • TAN Miao, DAI Mengfei
    Journal of library and information science in agriculture. 2025, 37(4): 83-93. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0217

    [Purpose/Significance] With the growing demand for intelligent cultural services, libraries are seeking innovative approaches to enhance access to and engagement with historical literature. Generative AI presents promising opportunities for transforming digital reading services, particularly in processing, interpreting, and promoting complex historical documents. This study investigates the integration of generative AI into library-based historical literature promotion, aiming to address persistent access limitations, foster more interactive user experiences, and optimize the depth and breadth of reading engagement. [Method/Process] This research adopts a multi-method approach combining literature review, comparative platform observation, and empirical implementation practice. The study focuses primarily on Shanghai Library's historical digital collections and AI-enabled services. It develops a structured three-layered implementation framework encompassing the data layer, application layer, and service layer, each mapped to corresponding technical and operational phases of digital reading promotion. Within this architecture, a six-step service pathway is articulated: demand analysis, activity planning, content mining, multimodal interaction, content review, and intelligent recommendation. Extensive practical experimentation is conducted across these stages. Key innovations include the application of Retrieval-Augmented Generation (RAG) to support complex historical document Q&A; the use of multimodal creative tools (e.g., Midjourney) to generate engaging visual materials; implementation of voice-based AI interactions to improve accessibility for diverse user groups; and the deployment of dynamic content management modules for librarians to curate and monitor AI-generated materials.
Additionally, backend tools such as user profiling dashboards, personalized push notification systems, and topic-based knowledge repositories are developed and tested to enhance librarians' ability to deliver targeted and data-driven reading promotions. [Results/Conclusions] The findings demonstrate that generative AI significantly enhances the efficiency, precision, and user engagement levels of historical literature services. AI-driven methods substantially improve OCR accuracy, streamline metadata generation, facilitate both visual and semantic content creation, and enable real-time interactive services via natural language interfaces. These advancements contribute to a more immersive and responsive digital reading experience. However, several challenges persist, including limited availability of domain-specific training data, the ongoing risk of AI-generated content inaccuracies (hallucinations), and unresolved intellectual property considerations. The study emphasizes the importance of developing domain-specific large language models, establishing expert-assisted validation mechanisms, and formulating clear legal and ethical guidelines for AI-generated content in the library context. While the prototype platform developed in this research exhibits notable gains in user engagement and librarian workflow support, its long-term sustainability hinges on fostering cross-institutional resource collaboration, advancing supportive policy frameworks, and embedding robust ethical safeguards. Future research directions include the exploration of adaptive AI training systems incorporating user feedback loops, integration of cross-library data resources, and the enhancement of multilingual AI capabilities to better serve diverse and global user communities.
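The retrieval half of the RAG approach mentioned above can be sketched as follows. This is a minimal, illustrative example only: it substitutes a toy TF-IDF retriever for the dense embeddings a production historical-document Q&A system would use, and the passages and query are invented, not drawn from Shanghai Library's collections.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: score stored
# passages against a query, then build a grounded prompt for the generator.
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, docs, k=1):
    """Return indices of the k passages most similar to the query."""
    vecs, idf = tf_idf_vectors(docs)
    tf = Counter(query)
    qvec = {t: tf[t] * idf.get(t, 0.0) for t in query}
    ranked = sorted(range(len(docs)), key=lambda i: cosine(qvec, vecs[i]), reverse=True)
    return ranked[:k]

# Invented "historical passages" (tokenized); a real system would embed
# the digitized collection with a neural encoder instead.
passages = [
    "shanghai library opened its reading room in 1952".split(),
    "the gazetteer records local grain prices by year".split(),
    "genealogy volumes trace family lineages across dynasties".split(),
]
top = retrieve("what do the gazetteer records contain".split(), passages, k=1)
prompt = "Answer using only this passage: " + " ".join(passages[top[0]])
```

The generator (an LLM) would then answer from `prompt`, which keeps responses anchored to the retrieved source text rather than the model's parametric memory.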

  • SHI Xujie, YUAN Fan, LI Jia
    Journal of library and information science in agriculture. 2025, 37(5): 40-57. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0274

    [Purpose/Significance] This paper investigates how generative artificial intelligence (GenAI) is reshaping the Searching as Learning (SAL) paradigm, focusing on its implications, challenges, and prospects in Library and Information Science (LIS). Traditional SAL emphasizes the cognitive and metacognitive processes by which users acquire and construct knowledge through information retrieval. However, the advent of GenAI - especially large language models (LLMs) - introduces a transformative shift from keyword-based querying to dynamic, dialogic, and multimodal interactions. This study aims to clarify the conceptual and practical significance of GenAI-driven SAL, explore its technical trajectories, and evaluate its impact on learners' behavior, learning strategies, and information literacy. It also highlights the emerging ethical and epistemological challenges posed by GenAI systems in learning-oriented search contexts. [Method/Process] Using the PRISMA-ScR framework, this study conducted a scoping review of academic and gray literature published between January 2023 and May 2025. A total of 1 681 records were retrieved from major academic databases and preprint repositories. After screening titles, abstracts, and full texts, 22 studies were selected for in-depth qualitative analysis. Thematic coding and synthesis were conducted to extract recurring patterns and theoretical insights across three key dimensions: GenAI-enhanced search technologies, evolving user behaviors in SAL contexts, and normative concerns associated with credibility, agency, and transparency. The analysis was grounded in LIS theories, including information behavior, metacognitive models of learning, and digital/information literacy frameworks. [Results/Conclusions] The results reveal that GenAI is fundamentally reshaping SAL in three key areas. 
First, in terms of technology, GenAI systems (e.g., GPT-based chat interfaces) provide conversational, context-aware, and multimodal assistance, transforming SAL from reactive searching to proactive co-learning. These systems scaffold learning through adaptive query reformulation, real-time content summarization, and source triangulation, supporting iterative reflection and cognitive engagement. Such affordances mirror the functions traditionally associated with human tutors, thereby expanding learners' capacity for critical inquiry and self-directed exploration. Second, user behaviors in SAL are undergoing a paradigm shift. Learners increasingly engage in human-AI co-construction of knowledge, participating in iterative query-dialogue loops that facilitate concept clarification and knowledge synthesis. While this enhances engagement, personalization, and perceived learning efficiency, it also raises concerns. Over-reliance on AI-generated content may undermine learners' critical thinking, reduce information discernment, and promote passive consumption. The study identifies a dual effect. While GenAI augments higher-order thinking and strategic learning, it can also lead to superficial comprehension when learners lack the skills to critically evaluate AI output. Third, the review underscores the urgency of addressing ethical and pedagogical challenges. Issues such as AI hallucination, algorithmic opacity, and biased content threaten the credibility of GenAI-enhanced learning environments. From an LIS perspective, this necessitates a reconfiguration of information literacy education to include AI literacy. Students must be equipped not only to retrieve and evaluate information, but also to interrogate algorithmic sources, verify provenance, and triangulate AI outputs with authoritative references. GenAI should be positioned as a cognitive assistant, not a definitive knowledge authority. 
GenAI holds substantial promise in enhancing SAL through greater interactivity, personalization, and cognitive scaffolding. However, these benefits must be balanced with informed practices that mitigate risks to learner autonomy, critical reasoning, and information ethics. This work establishes an analytical foundation for future research and practices at the intersection of AI, learning, and information behavior.

  • ZHANG Tao, LYU Qianhui
    Journal of library and information science in agriculture. 2025, 37(4): 12-23. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0167

    [Purpose/Significance] Generative artificial intelligence (GAI) is currently advancing at an astonishing pace. GAI has unleashed remarkable potential in various fields and is significantly fueling social and economic development. However, this rapid progress has also given rise to a plethora of complex issues, including but not limited to data security breaches, privacy violations, the spread of false information, and intellectual property infringements. Existing research primarily focuses on the governance of AI in general, leaving a gap in in-depth exploration of GAI. This study aims to fill this void by meticulously comparing the governance approaches of Europe and the United States in the realm of GAI. Through this comparison, the study aims to provide valuable insights for China to refine its own governance system. This is not only crucial for China's domestic technological development and social stability but also plays a pivotal role in promoting the harmonization of the global governance framework for GAI. [Method/Process] This research adopts a multi-faceted approach. It commences with a comprehensive review of relevant literature, gathering insights from a wide range of academic sources to understand the current state-of-the-art in GAI governance in Europe and the United States. Additionally, it deploys the case-study method, examining real-world examples such as the development of OpenAI's GPT series in the US and the implementation of the EU's AI Act. By analyzing these cases, it can vividly illustrate the practical implications and impacts of different governance strategies, thus enabling a more in-depth and accurate comparison. [Results/Conclusions] We found that the European Union adopts a regulatory path centered on data protection and ensures the fairness and sustainability of technological development through a strict legal framework. However, this strong regulatory model may stifle innovation vitality to some extent. 
The United States adopts a governance model oriented towards market accountability, emphasizing technological innovation leadership and free development. It stimulates market vitality through industry self-discipline and flexible regulation, but there is a hidden danger of insufficient ethical risk control. Based on these findings, this paper recommends that China adopt a balanced approach. China should integrate elements of both the U.S. and E.U. models to foster innovation while ensuring ethical and legal compliance. Future research could explore ways to adapt these governance models to emerging trends such as integrating GAI with other emerging technologies and addressing the unique governance challenges posed by cross-border data flows.

  • ZHANG Tao, WU Sihang
    Journal of library and information science in agriculture. 2025, 37(7): 91-105. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0399

[Purpose/Significance] This study addresses the "motivation black box" problem. By integrating achievement goal theory and technology acceptance models, it aims to construct a four-dimensional "motivation-identity-cognition-engagement" theoretical framework to analyze the driving mechanisms underlying AI teaching assistant usage behavior. [Method/Process] A questionnaire survey was utilized in this study. The Chaoxing Learning platform served as the research context, and college students who use AI teaching assistants constituted the research subjects. The chain mediating effect between technical identity recognition and technical acceptance was tested using structural equation modeling (SEM). The significance of the pathways was verified via the Bootstrap sampling method. Data analysis was performed using SPSS 26.0 and SmartPLS 3.3.9 software. [Results/Conclusions] Key findings reveal that within the learning environment integrating Chaoxing's online courses with AI teaching assistants, achievement goal orientations demonstrated significant divergence, with mastery-approach goals (MAP) emerging as the sole significant driver - other goal orientations showed no statistically reliable predictive effects. Crucially, MAP significantly promoted dependent (β=0.308), critical (β=0.262), and exploratory (β=0.244) usage behaviors through the "technology identity recognition → technology acceptance" chain-mediation pathway. Furthermore, technology identity recognition exhibited dual mediation dominance in behavior formation, as this chain-mediation pathway accounted for more than 50% of total effects across all three usage behaviors, particularly for dependent and exploratory usage. Notably, technology identity recognition demonstrated the strongest mediation effect specifically on dependent behaviors (β=0.418). Further analysis indicates MAP's total effect on technology identity recognition substantially exceeded its direct effect on technology acceptance. 
This critical finding aligns with Deci and Ryan's self-determination theory, confirming that intrinsic motivation (exemplified by MAP) facilitates deeper skill internalization. Specifically, students focused on competence development showed greater tendency to integrate AI skills into their self-concept (e.g., perceiving themselves as "technology-proficient learners") rather than viewing them merely as external tools - a mechanism that empirically explains why traditional technical training that emphasizes operational skills often fails to foster sustained usage. Most significantly, this research provides important implications for educators in guiding students' use of AI teaching assistants: they should prioritize cultivating students' mastery-approach goals (MAP) through instructional design that strengthens students' pursuit of knowledge. Such an approach enhances the effectiveness of AI tools in teaching while simultaneously offering direction for the Chaoxing Learning Platform to optimize its AI teaching assistant features. Specifically, the platform should enhance personalized learning support tailored to the needs of MAP-oriented users, thereby better aligning with students' intrinsic learning motivations.
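The Bootstrap test of a mediated path used above can be sketched as follows. All data, effect sizes, and the random seed here are synthetic, and a single mediator M stands in for the study's two-step "identity recognition → acceptance" chain; the point is only the percentile-interval logic, not the reported coefficients.

```python
# Sketch of a Bootstrap percentile test for an indirect (mediated) effect:
# resample cases, re-estimate the a and b paths, and check whether the
# 95% interval for a*b excludes zero.
import random

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(42)
n = 300
# X: motivation (e.g., mastery-approach goals); M: mediator; Y: usage behavior.
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]                        # path a
Y = [0.4 * m + 0.1 * x + random.gauss(0, 1) for x, m in zip(X, M)]   # path b

def indirect(idx):
    """Indirect effect a*b on one resample (simplified: b not X-adjusted)."""
    xs, ms, ys = [X[i] for i in idx], [M[i] for i in idx], [Y[i] for i in idx]
    return ols_slope(xs, ms) * ols_slope(ms, ys)

boots = sorted(indirect([random.randrange(n) for _ in range(n)])
               for _ in range(1000))
ci_low, ci_high = boots[24], boots[974]   # 95% percentile interval
significant = ci_low > 0 or ci_high < 0   # CI excluding zero -> mediation holds
```

With these synthetic paths the interval sits above zero, mirroring how the study's Bootstrap procedure confirms the chain-mediation pathway.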

  • XIAO Keyi, LI Yunfan, CHEN Yingying, PENG Xi
    Journal of library and information science in agriculture. 2025, 37(4): 51-65. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0195

[Purpose/Significance] With the rapid advancement of the information society and the ongoing construction of smart cities, public libraries are facing increasing pressure to transition into a smart service model. Smart services leverage cutting-edge technologies to enhance user experiences and improve the efficiency of library services. Public libraries in China, however, are encountering challenges such as mismatched service offerings, unsatisfactory user experiences, and inadequate technological implementation as they move toward a smart service model. It is crucial to identify how to optimize this transition in a manner that prioritizes user needs, ensuring that smart library services meet the demands of a diverse user base. This research aims to explore the dynamic relationships among users, technology, content, and the service environment in public library smart services, thereby promoting innovation and addressing diverse user requirements. [Method/Process] The study develops a model to analyze the influence of various factors on the user experience of smart services in public libraries. Adopting Actor-Network Theory (ANT), it constructs an integrated framework for understanding the interactions between various actors in the smart service ecosystem. By combining both online and offline surveys, the research captures library users' perceptions of their smart service experiences, identifies the critical factors that influence user experience, and provides valuable data support and strategic recommendations for optimizing smart library services. Principal component analysis is used to identify the key factors affecting user experiences. 
[Results/Conclusions] The findings show that the core factors influencing user experience in smart services include: "advanced technology support," "network compatibility and flexibility," and "convenient communication channels" within the "library technology actors" dimension; "usability, operability, clarity, and comfort of portal browsing" within the "interaction between human-technology actors" dimension; "effort expectancy, information literacy, and time-energy consumption" within the "user (human actor)" dimension; "professionalism and competence of library staff" within the "librarian (human actor)" dimension; and "social influence and facilitating conditions" within the "interaction between human actors" dimension. These factors have a positive impact on the user experience, with particular attention required for the factors related to the "technology actors" dimension. Libraries need to focus on improving three factors in this area while maintaining and further optimizing the other factors. "Human-technology interaction" activities are crucial in improving the usability and user-friendliness of smart services, especially in more complex technological settings. Social influence and enabling conditions play an important role in enhancing user trust and their overall experience. Based on the empirical findings, the study proposes optimization strategies for public library smart services from three dimensions: the technical actors of the smart service system, the "human-technology actors" interactions, and the "translation" activities among human actors. These strategies include enhancing multi-dimensional collaboration among technical actors in the smart service system, improving the sensory experience of users with the smart service terminals in libraries by increasing their ease of use, empowering digital literacy, and optimizing innovation spaces to drive bidirectional reader participation. 
The aim is to provide a specific guide for the design and optimization of smart library services.
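The principal component step used to condense the survey items can be illustrated in closed form for two variables. The item names and ratings below are invented stand-ins for the study's questionnaire data; for two items the 2×2 covariance matrix has an exact eigenvalue formula, so no linear-algebra library is needed.

```python
# Illustrative PCA on two correlated survey items: the largest eigenvalue of
# the covariance matrix is the variance captured by the first component.
import math

ease =    [4, 5, 3, 4, 5, 2, 4, 3, 5, 4]   # hypothetical "ease of use" ratings
clarity = [4, 5, 2, 4, 4, 2, 3, 3, 5, 5]   # hypothetical "clarity" ratings

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

x, y = center(ease), center(clarity)
n = len(x)
# 2x2 sample covariance matrix [[sxx, sxy], [sxy, syy]]
sxx = sum(a * a for a in x) / (n - 1)
syy = sum(b * b for b in y) / (n - 1)
sxy = sum(a * b for a, b in zip(x, y)) / (n - 1)

# Eigenvalues of a symmetric 2x2 matrix in closed form
tr, det = sxx + syy, sxx * syy - sxy * sxy
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2  # variance along PC1
explained = lam1 / tr                            # share of total variance on PC1
```

A high `explained` share is what justifies collapsing correlated items into one factor such as "usability, operability, clarity, and comfort of portal browsing".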

  • LIU Han
    Journal of library and information science in agriculture. 2025, 37(4): 94-107. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0170

    [Purpose/Significance] The confluence of digital transformation and the Fourth Industrial Revolution has driven the emergence of data science as an interdisciplinary field. Data science leverages structured and unstructured data to discover knowledge and support decision-making, thereby reshaping research paradigms in information science, computer science, and the social sciences. This study focuses on the development of data science degree programs within member institutions of the iSchools Consortium, a global alliance of information schools. Through systematic empirical investigation, the study aims to unveil innovative features in the program's disciplinary positioning, curriculum architecture, and talent cultivation models. This research aims to inform global information science education institutions on how to optimize their disciplinary strategies and curricular designs for data science. Ultimately, this will address the challenges of knowledge system reconstruction and talent development iteration within the traditional library and information science (LIS) discipline amid its digital transformation. [Method/Process] This study employed a web-based survey and content analysis methodology to create a multi-dimensional analytical framework based on the 2023 iSchools Consortium membership directory. Using a stratified sampling approach that integrated disciplinary influence, as measured by the QS World University Rankings, and program maturity indicators, including curriculum comprehensiveness and industry partnership networks, eight representative U.S. higher education institutions were selected as core samples. A systematic empirical investigation was conducted to thoroughly analyze the current landscape of data science degree programs. 
The study focused on four critical dimensions: 1) degree-awarding structures such as degree types, concentration specializations, and accreditation standards; 2) credentialing ecosystems such as micro-credentials, stackable certificates, and non-degree pathways; 3) curricular architectures such as core course clusters, elective modules, and interdisciplinary integration mechanisms; and 4) career trajectory outcomes, such as sectoral distribution, occupational roles, and industry-specific skill premiums. [Results/Conclusions] The study summarizes the current state of data science discipline education in international iSchools from several perspectives, including the characteristics of degree program offerings, the reconstruction of disciplinary positioning, pathways for curriculum integration, and insights into employment trends. Based on this, it makes recommendations for developing China's domestic data science discipline. These recommendations include optimizing the disciplinary layout, innovating the curriculum system, and deepening industry-education integration. However, it should be noted that this research is constrained by its small sample size of eight institutions and its geographical scope, which is limited to the United States. In the future, the study could expand to encompass members of the European iSchools consortium, such as the iSchool at University College London and the iSchool at Humboldt University in Berlin, as well as emerging data science programs in the Asia-Pacific region. Through cross-national comparative analysis, it aims to reveal how culture, policies, and industrial ecosystems impact disciplinary development differently. Furthermore, the study could incorporate Learning Analytics technology to model learner behavior in data science courses offered on MOOC platforms, such as Coursera and edX. This would facilitate the refinement of course module granularity and adaptability to better meet learners' needs.

  • XIAO Qinghua
    Journal of library and information science in agriculture. 2025, 37(8): 50-60. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0165

[Purpose/Significance] This study aims to explore the reading difficulties experienced by children in intergenerational caregiving situations in rural China, analyze the causes of these difficulties, and propose targeted solutions. The research is motivated by the growing concerns about educational disparities and developmental challenges experienced by this vulnerable group, especially within the context of China's rural revitalization strategy. Unlike previous studies, which have primarily focused on the broader category of rural left-behind children, this paper focuses on a specific subgroup, rural children raised by their grandparents, to offer a more nuanced understanding of the unique obstacles these children face in relation to reading. This study contributes to both academic discourse on rural education and efforts aimed at promoting equitable development by identifying the structural and cultural factors that contribute to low reading literacy among these children. By integrating theories of family sociology, educational inequality, and the digital divide, it fills a critical gap in existing literature and offers new insights into how intergenerational caregiving intersects with literacy development. [Method/Process] The research was conducted in a rural county located in Guangdong Province. A mixed-methods approach was adopted that combined in-depth interviews with caregivers and teachers, a textual analysis of local education policies, and online surveys of rural schools and community centers. A grounded theory approach was employed as the analytical framework, and a three-stage coding process was used to develop a measurement model for assessing individual reading barriers. This methodological rigor ensured that the findings were grounded in empirical data, yet still allowed for theoretical generalization. 
[Results/Conclusions] The findings reveal that rural children under intergenerational care face multiple reading challenges, including limited access to books, inadequate reading environments, and a lack of awareness about the importance of reading. These issues stem from complex sociostructural factors, including fragmented family structures, limited educational opportunities for grandparents, and imbalanced use of digital technologies. To address these challenges, the study proposes a multi-pronged intervention framework. This framework includes strengthening policy support for rural reading programs, mobilizing volunteers as reading mentors, guiding the appropriate use of digital tools to enhance literacy, and encouraging intergenerational reading activities within families. While this study provides valuable insights, further longitudinal and comparative research across diverse rural regions is needed to validate and expand upon these findings. Future studies could also examine the long-term impact of reading interventions on children's academic achievement and psychosocial development.

  • SONG Yaping, LIAN Kangping
    Journal of library and information science in agriculture. 2025, 37(8): 92-103. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0402

[Purpose/Significance] Study tours, as a dynamic integration of education and tourism, represent a significant opportunity for public libraries to innovate their service models and enhance their cultural and educational roles within the framework of cultural tourism integration. This study explores the pathways for public libraries to develop high-quality study tours, which addresses the growing demand for diverse, high-quality social education in the context of China's "cultural confidence" and "national reading" initiatives. Unlike prior studies that focus narrowly on specific library practices or regional cases, this research provides a comprehensive analysis by integrating domestic and international perspectives and emphasizing the strategic role of public libraries in cultural resource transformation. This paper contributes to the academic discourse by proposing actionable frameworks for service innovation. These frameworks position public libraries as pivotal players in advancing cultural education and addressing contemporary societal needs through interdisciplinary collaboration and resource optimization. [Method/Process] Adopting a cultural tourism integration approach, this study employs a multi-method approach that combines a literature review, a comparative case analysis, and an empirical survey to examine the current state of, and challenges to, public library study tour services. The research draws on policy documents, industry reports, and academic literature to establish a theoretical foundation, and identifies best practices and gaps by analyzing representative domestic and international library cases. Domestic cases span provincial, municipal, and county-level libraries, covering diverse regions and service models, such as digital innovation and regional cultural integration. 
International cases include libraries in North America, Europe, and Asia, highlighting technological applications and cross-sector collaboration. The comparative analysis focuses on cooperation models, course design, and service mechanisms, supported by empirical data from user feedback and activity records. This approach ensures a robust understanding of practical challenges and opportunities, grounded in both national policy contexts and global experiences. [Results/Conclusions] The study identified key challenges in public library study tour services, including insufficient resource integration, severe homogenization of service formats, lack of evaluation systems, and weak cross-sector collaboration mechanisms. To address these issues, five strategic pathways have been proposed: establishing multi-party collaboration frameworks involving government, schools, and social organizations; creating expert talent pools to enhance service professionalism; developing standardized service guidelines to improve consistency and quality; deepening thematic content by leveraging library collections; and implementing comprehensive feedback and evaluation systems to ensure continuous improvement. These strategies enable public libraries to create distinctive, culturally rich study tour programs that align with regional identities and educational goals. However, there are limitations, such as the scalability of resource-intensive models and the need for ongoing funding. Future research could explore the integration of digital technology, such as AI-driven evaluation systems, and cross-regional collaboration to enhance scalability and inclusivity of public libraries, thereby advancing their role in cultural tourism integration and social education.

  • XUE Qian, ZHAO Hong, REN Fubing
    Journal of library and information science in agriculture. 2025, 37(10): 78-95. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0368

[Purpose/Significance] Science and technology have emerged as pivotal domains of competition between China and the United States. This article provides a quantitative analysis of US technology think tanks' reports on electronic information research and industry, with a focus on the evolution of themes and topics over the past decade. This analysis not only reflects their technological priorities but also maps their analytical focus on China, providing decision-making support for China's think tank development and strategic response. [Method/Process] Based on the "2020 Global Go To Think Tank Index Report" released by the Think Tanks and Civil Societies Program (TTCSP) at the University of Pennsylvania, and considering factors such as think tank authority, research topic relevance, and research continuity, we collected a total of 1 360 reports on electronic information research and industry published between 2015 and 2024 by 8 leading US technology think tanks. Topic analysis was conducted with BERTopic, a topic modeling tool based on Transformer embeddings. The methodology involved several key steps. First, text cleaning was performed using NLTK tools; then, the all-MiniLM-L6-v2 model was employed to generate high-dimensional document embedding vectors. Subsequently, dimensionality reduction was achieved through the UMAP algorithm, followed by density clustering using the HDBSCAN algorithm. Finally, topic words were extracted based on the c-TF-IDF algorithm. [Results/Conclusions] The research identified 31 distinct research themes, of which 6 were directly related to China, specifically: global semiconductor industry competition, Sino-US digital policies and cloud computing competition, 5G network and technology competition, Chinese AI investment, Sino-US science and innovation policies, and Sino-US military technology competition. 
These 31 research themes were hierarchically clustered using HDBSCAN and could be grouped into 11 major research directions. The US technology think tanks persistently focused on these 11 directions, which were largely concentrated in key areas of electronic information research and industry, such as semiconductors and microelectronics, artificial intelligence, wireless communication, quantum information technology, network security, and big data. The evolutionary trends across these research directions were generally consistent, with military technology and network security receiving the highest levels of attention. Attention to China has undergone a significant strategic shift over the years, with a sharp increase in attention to semiconductor export controls, AI technology, and Sino-US digital competition. Based on the identified key themes and topic words, it is highly recommended to establish an evolutionary mapping of China-related topics and to develop a dynamic monitoring and early warning mechanism for technology issues concerning China. Future research could incorporate larger-scale corpus resources and more advanced large language models to continuously optimize topic modeling effectiveness.
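The final topic-word extraction step can be sketched compactly. The scoring follows BERTopic's class-based TF-IDF idea, tf·log(1 + A/f), where A is the average word count per cluster and f a term's total frequency across clusters; the two miniature "clusters" of report terms below are invented for illustration.

```python
# Sketch of c-TF-IDF topic-word scoring applied after clustering:
# terms frequent inside one cluster but rare overall score highest.
import math
from collections import Counter

def c_tf_idf(clusters):
    """Score each term per cluster as tf * log(1 + avg_cluster_size / total_freq)."""
    tf = [Counter(doc) for doc in clusters]
    f = Counter()
    for c in tf:
        f.update(c)
    avg = sum(sum(c.values()) for c in tf) / len(tf)
    return [{t: c[t] * math.log(1 + avg / f[t]) for t in c} for c in tf]

# Two tiny invented clusters of report vocabulary
chips = "semiconductor export controls chip fabs semiconductor supply".split()
ai = "ai models compute governance ai safety evaluation".split()

scores = c_tf_idf([chips, ai])
top_chip = max(scores[0], key=scores[0].get)
top_ai = max(scores[1], key=scores[1].get)
```

Terms that recur within a single cluster (here "semiconductor" and "ai") rise to the top of that cluster's word list, which is how each theme receives its characteristic topic words.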

  • LI Xinxin, MA Yumeng, JU Zihan, WANG Jing
    Journal of library and information science in agriculture. 2025, 37(10): 53-66. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0396

    [Purpose/Significance] In recent years, the rapid rise of large language model technology has shown significant advantages in understanding semantic context and capturing multidimensional sentiment tendencies. This study explores an aspect-level sentiment analysis method for science and technology policy comments based on large language models, aiming to uncover latent knowledge within these texts and provide data support for evaluating the effectiveness and subsequent optimization of policies. [Method/Process] Taking the electric vehicle industry as an example, a burgeoning sector vital to achieving the "dual carbon" goals and promoting green low-carbon development, this study proposed a policy satisfaction evaluation model. The model uses large language models for fine-grained aspect-level sentiment analysis of policy comment texts. The process includes the following steps: 1) Data collection and preprocessing: Comments related to electric vehicle policies were collected from the "Interactive Topics" section of the "Autohome" website using Python. Deep learning techniques were applied to set rules for the comment texts and automatically add punctuation marks to Chinese texts for data pre-processing. 2) Aspect word extraction: The steps include text tokenization, determining a candidate aspect word set, expanding the aspect word set, and clustering aspect words. A total of 3 405 aspect words were extracted from 35 000 comments, forming six clusters: infrastructure construction, vehicle performance configuration, national policies, technological development, automotive safety, and automotive sales market. Aspect-level sentences were extracted using aspect words and punctuation information, with a subset of sentences manually labeled to build training and validation corpora, resulting in 14 911 aspect-level sentences. 
3) Sentiment tendency recognition model training: A prompt template for aspect-level sentiment classification tasks was designed, and the LoRA method was used to fine-tune the large language model with the manually labeled training set. The model's performance was evaluated using a validation set, resulting in the classification of comments on electric vehicle policies into positive, neutral, and negative sentiments. 4) Comparative experiment: The fine-tuned large model was compared with the mainstream sentiment classification model, BERT, to assess the performance of different models in aspect-level sentiment classification tasks. [Results/Conclusions] The results show that the proposed method outperformed the BERT baseline on multiple metrics, including accuracy, recall, and F1 score, with improvements of 11.49%, 12.43%, and 11.43%, respectively. Overall, public attention is higher towards vehicle performance configuration and automotive sales market, while infrastructure construction receives the lowest attention. The overall public satisfaction with electric vehicles is relatively low, with negative comments outweighing positive comments across all aspects, consistent with the "negative bias" theory in social psychology. Satisfaction issues are particularly prominent in the areas of automotive safety and infrastructure construction. Finally, policy recommendations have been proposed to optimize electric vehicle subsidy policies, strengthen policy promotion, improve infrastructure construction, and enhance after-sales service support systems.
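The aspect-level prompt design in step 3 can be illustrated with a minimal sketch. The template wording, label names, and fallback behavior below are assumptions for illustration only, not the authors' exact prompt or LoRA configuration:

```python
# Illustrative sketch of an aspect-level sentiment prompt template and a
# label parser for the model's free-form response. The exact prompt text,
# label set, and LoRA hyperparameters used in the paper are not reproduced.

LABELS = ("positive", "neutral", "negative")

def build_prompt(sentence: str, aspect: str) -> str:
    """Compose an instruction-style prompt that pairs one comment sentence
    with one extracted aspect term, for fine-tuning or inference."""
    return (
        "Classify the sentiment that the following comment expresses "
        f"toward the aspect '{aspect}'.\n"
        f"Comment: {sentence}\n"
        f"Answer with one of: {', '.join(LABELS)}."
    )

def parse_label(model_output: str) -> str:
    """Map a free-form model response back onto the fixed label set,
    defaulting to 'neutral' when no label is recognized."""
    text = model_output.strip().lower()
    for label in LABELS:
        if label in text:
            return label
    return "neutral"
```

In this kind of pipeline, each of the 14 911 aspect-level sentences would be rendered through a template like `build_prompt` before fine-tuning, and the parser guards against responses that deviate from the expected single-word answer.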

  • QIAN Li, WANG Qianying, LIU Yi, ZHANG Yuanzhe, CHANG Zhijun
    Journal of library and information science in agriculture. 2025, 37(5): 5-14. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0386

    [Purpose/Significance] Currently, large language models (LLMs) and agents have emerged as core technical paradigms in artificial intelligence, with their integration into scientific research scenarios holding profound significance for transforming research paradigms. Traditional scientific research is facing an increasing number of challenges such as inefficient literature searches, the processing of massive amounts of data, repetitive experimental tasks, and barriers to collaborative innovation. Agents, empowered by LLMs, offer a promising solution to these bottlenecks by enabling intelligent automation and adaptive collaboration across research workflows. Beyond basic task assistance, they play a pivotal role in facilitating knowledge fusion, accelerating breakthroughs in frontier areas, and reshaping traditional research models. This study aims to clarify the core techniques and applications of agents in scientific research, highlighting their transition from auxiliary tools to integral innovation partners, which is crucial for accelerating knowledge discovery, enhancing research efficiency, and promoting the shift toward intelligent and collaborative research models. [Method/Process] Employing an objective, inductive approach, this study thoroughly explains the core technical modules of agents including planning, perception, action, and memory, as well as the operational mechanisms of multi-agent collaboration. It also integrates an analysis of agent applications throughout the entire scientific research lifecycle. This analysis covers key scenarios including literature review and idea formulation, experimental planning and design, data processing and execution, result analysis and knowledge discovery, and research report composition. By analyzing the application value and existing limitations of agents, this study proposes prospects and recommendations for the application and development of agents in scientific research scenarios. 
[Results/Conclusions] The findings reveal that LLM-driven agents are evolving from basic task executors to active participants in scientific discovery, demonstrating significant transformative potential throughout the entire research workflow. They facilitate more efficient information processing, smarter experimental design, and deeper knowledge integration, thereby redefining traditional research patterns. However, several challenges persist, including limitations in long-range reasoning capabilities and underdeveloped ecosystem support. There are also ethical and security concerns, such as data privacy and academic integrity. To address these, future efforts should focus on strengthening intelligent computing infrastructure for scientific data, deepening collaborative development of domain-specific agents, establishing a unified open collaboration framework with standardized interfaces, and building "human-in-the-loop" hybrid systems and multiple evaluation mechanisms. These measures will enable agents to become core partners in scientific innovation, driving the transition of research paradigms toward greater intelligence and collaboration.

  • JIANG Yumeng
    Journal of library and information science in agriculture. 2025, 37(7): 19-34. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0357

    [Purpose/Significance] The rapid advancement of artificial intelligence (AI) has fundamentally transformed academic research and information services. This makes AI literacy education a critical part of the strategy for academic libraries. As AI technologies become integrated into various aspects of scholarly activities, including literature searches, data analysis, academic writing and publishing, libraries must expand their traditional information literacy programs to include comprehensive AI competencies. This study focuses on analyzing AI literacy education practices in Nordic academic libraries, which are recognized for their progressive approaches to digital education and technology integration. By examining these international exemplars, the research aims to provide valuable references for academic libraries in China. The findings will help libraries develop systematic approaches to equip faculty and students with both technical AI skills and critical understanding of AI's ethical implications, ultimately supporting the cultivation of future-ready talents in the digital era. [Method/Process] This research employed a web-based survey methodology to investigate AI literacy education programs in 23 academic libraries across Nordic countries (Denmark, Finland, Norway, and Sweden). The study systematically analyzed four key dimensions of these programs: educational stakeholders (including libraries, faculty, and IT departments), target audiences (undergraduates, graduate students, researchers, and faculty), educational content (covering both technical skills and ethical considerations), and instructional formats (such as workshops, courses, and online modules). The selection of Nordic libraries as case studies was based on their established reputation in digital literacy education and early adoption of AI-related services. Data collection focused on publicly available information about each library's AI education initiatives. 
The analysis particularly emphasized how these libraries integrated AI literacy within their existing information literacy frameworks while addressing the specific needs of different user groups. [Results/Conclusions] The investigation revealed several effective practices in AI literacy education. First, successful programs typically involved collaboration among multiple stakeholders, with libraries working closely with academic departments, IT services, and sometimes external partners to develop comprehensive curricula. Second, the content was carefully designed to address different competency levels, from basic AI awareness for undergraduates to advanced applications for researchers. Third, most programs balanced technical instruction with critical discussions about ethical challenges such as algorithmic bias and data privacy. Fourth, diverse delivery methods were employed, including hands-on workshops, credit-bearing courses, and self-paced online modules, allowing for flexibility in learning. For Chinese academic libraries seeking to enhance their AI literacy offerings, these findings suggest several practical recommendations: establishing cross-departmental collaboration mechanisms to pool expertise and resources; developing tiered educational content that caters to users with varying needs and backgrounds; incorporating both technical training and ethical discussions into the curriculum; and adopting flexible teaching formats to maximize accessibility. Future development should focus on creating localized AI literacy frameworks that consider China's unique educational context and technological landscape, while maintaining international perspectives through continued dialogue with global peers.

  • LI Guihua
    Journal of library and information science in agriculture. 2025, 37(8): 40-49. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0472

    [Purpose/Significance] The social application of technologies, such as generative artificial intelligence (AI), presents challenges for adolescents when it comes to deep reading. In response, China has promoted the Youth Reading Initiative and issued the "Notice on Further Implementing the National Youth Student Reading Action", which outlines five key projects and provides a clear roadmap for action. To thoroughly implement and effectively achieve the goals, it is essential to clarify the specific path choices and their inherent rationality. [Method/Process] This paper reviews China's decade-long initiatives to develop a youth reading ecosystem. It demonstrates that the nation has established a robust foundation to support "holistic reading" initiatives and prioritizes creating such environments as its strategic focus. The "Notice on Further Implementing the National Youth Student Reading Initiative" was the first to mention the concepts of "holistic reading" and "deep reading" together. Thus, in this paper, we first clarified four characteristics of holistic reading, and then analyzed the relationship between "holistic reading" and "deep reading" based on discussions about the real-world impact of the AI era on youth reading. We finally elucidated the logical and practical foundations for the formation of current youth reading promotion pathways in China. [Results/Conclusions] China's recent policies for youth reading initiatives demonstrate the nation's commitment to a "holistic reading" approach that encourages "deep reading" among adolescents. Emerging from the interplay between contemporary educational philosophies and evolving educational environments, this strategic choice signifies a return to the fundamental essence of reading. Cultivating reading can comprehensively enhance teenagers' independent thinking, social responsibility, innovative spirit, and practical abilities. 
However, the development of deep reading skills among today's youth faces unprecedented challenges due to significant changes in media environments and knowledge acquisition methods. Therefore, in this era where technological environments profoundly reshape learning conditions, only by embracing the concept of "holistic reading" can teenagers develop the internal motivation needed to counteract the effects of the current information environment and cultivate their perseverance in deep reading. The progression from "holistic reading" to "deep reading" represents a significant shift from reading habits to reading competence. First, broadening one's reading scope lays the foundation for deep reading. Second, access to quality reading materials ensures effective outcomes of deep reading. Third, peer motivation cultivates the drive for deep reading. Fourth, promoting specialized reading creates societal momentum that propels deeper engagement. Finally, the paper posits that achieving this transformation necessitates coordinated efforts spanning various dimensions, including stakeholder engagement, goal-setting, and resource allocation.

  • CHEN Nan
    Journal of library and information science in agriculture. 2025, 37(12): 64-80. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0427

    [Purpose/Significance] With the rapid development of technologies such as artificial intelligence, big data, and cloud computing, digital-intelligent technologies are profoundly revolutionizing the service models and management frameworks of public libraries. This study is based on the development background of the digital-intelligent era under the 15th Five-Year Plan. It investigates the smart library services of the National Library, the Hong Kong Central Library, the Macao Central Library, libraries in the Taiwan region, and 31 provincial-level public libraries across China. The analysis focuses on the current research progress in smart library services provided by public libraries, examining both service content and methods. [Method/Process] This research employed a comparative analysis method, comparing the smart library services of 31 provincial-level public libraries in China with those in Hong Kong, Macao, and the Taiwan region to identify regional differences and development gaps. The investigation reveals that the development of smart library services in public libraries in China exhibits significant regional imbalances. Public libraries in economically developed regions demonstrate a significantly higher level of smart library services compared to those in less developed areas. [Results/Conclusions] Based on the findings, this study proposes development strategies for smart library services in public libraries within the digital-intelligent environment. These strategies include building an intelligent technology management system, establishing tiered smart service standards, cultivating a multidisciplinary team of smart librarians, creating an inclusive smart service system, developing an integrated smart resource platform, designing blended physical-virtual smart service spaces, and fostering collaborative innovation in smart service alliances. 
The challenges faced and the experiences gained by public libraries during the "14th Five-Year Plan" period provide critical insights for the formulation of the "15th Five-Year Plan," while also representing core issues that must be acknowledged and addressed in the journey of the "15th Five-Year Plan." This necessitates the development of scientific and effective strategies by public libraries, which is also a key task of the "15th Five-Year Plan." As a pivotal phase for the innovative development of public libraries, the "15th Five-Year Plan" period should actively implement national policies, with each library formulating development strategies and specific measures for smart library services based on the needs of public cultural development and their own practical circumstances. Grounded in the context of the "15th Five-Year Plan" and building upon the current state of smart library services in provincial-level public libraries during the "14th Five-Year Plan" period, this paper proposes strategies for smart library services in public libraries during the "15th Five-Year Plan" period in the digital-intelligent era, with the aim of contributing to the promotion and development of smart library services in public libraries nationwide.

  • DONG Ke, SONG Yuchen, WU Jiachun
    Journal of library and information science in agriculture. 2025, 37(7): 4-18. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0374

    [Purpose/Significance] The rapid development of artificial intelligence (AI) technology has reshaped the demand for data governance that is compliant, comprehensive, and refined. The European Union (EU) has proactively established a benchmark framework for AI data governance through targeted policy measures. However, there is a lack of systematic analysis on the policy layout and governance characteristics of AI data governance in the EU, both domestically and internationally. This paper focuses on the AI data governance policies in the EU, aiming to reveal the development process, policy layout, and governance characteristics of AI data governance in the region, providing valuable insights and references for advancing the global paradigm of AI data governance. [Method/Process] This paper systematically collects core AI data governance policy documents from 10 EU member states and the United Kingdom through multiple channels. By manually reviewing and selecting policy units related to "AI data governance," the paper traces the development process and uses a three-dimensional analytical framework - governance goals, governance bodies, and governance tools - to reveal the policy layout and governance characteristics of AI data governance in the EU. [Results/Conclusions] The study found that AI data governance in the EU has transitioned from soft law guidance to hard law regulation, gradually establishing three key governance goals: data ethics protection, data security defense, and data value release. Through the establishment of a multi-level legislative system and a coordinated execution framework, the EU focuses on regulatory constraints, procedural norms, AI system element support, and data ecosystem construction, demonstrating comprehensive governance capabilities. 
First, the EU has constructed a consensus framework for data governance through unified norms, centrally coordinating the diverse needs of member states during policy implementation, ensuring high consistency of governance rules across the EU. Second, the EU's policy design strikes a balance between rule uniformity and national autonomy, allowing member states to adjust policies flexibly according to their unique data cultures and industrial structures, fostering better localized governance. Third, the EU's governance model achieves a dynamic balance between "strong regulation" and "promoting development," ensuring the protection of citizens' rights through stringent ethical and risk prevention measures, while fostering innovation by releasing data value and driving AI industry growth. This paper provides a systematic analysis of the layout and characteristics of AI data governance in the EU. Future research could compare the EU framework with AI data governance policies in other major economies, such as the United States and China, to identify their respective strengths and weaknesses.

  • ZHANG Li, WANG Bo, JING Shui
    Journal of library and information science in agriculture. 2025, 37(5): 58-71. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0297

    [Purpose/Significance] As generative artificial intelligence (AI) transforms library services, existing evaluation systems fail to capture the dynamic characteristics of AI-driven resource discovery. This study develops a dynamic evaluation framework for public libraries' AI-enhanced services, addressing the gap between technological innovation and service assessment. [Method/Process] The research employed a mixed-methods approach to develop and verify a multi-dimensional evaluation framework based on Knowledge Organization Systems (KOS) theory. The framework comprises five primary dimensions: physical environment, technical architecture, content organization, user interaction, and innovation capability, operationalized through fifteen secondary indicators. Each indicator was carefully designed to capture AI-specific capabilities, including cognitive guidance efficiency, multimodal interaction precision, semantic network depth, and generation-enhanced utilization rate. A sophisticated hybrid weighting methodology was implemented, integrating subjective and objective approaches. For subjective weights, the Analytic Hierarchy Process was employed with 30 domain experts constructing pairwise comparison matrices using standardized scaling methods. Geometric mean aggregation was applied to synthesize individual judgments, with consistency ratios maintained below the threshold to ensure logical coherence. For objective weights, the entropy method analyzed actual evaluation data variance, with greater variance indicating higher discriminatory power. The final weights were derived through multiplicative synthesis combining both approaches. 
The empirical validation study involved collecting 492 valid questionnaires from 14 strategically selected public libraries representing different stages of AI implementation between September and November 2024: one municipal library with comprehensive AI deployment, 11 district libraries with partial implementation, and 2 county libraries in early adoption phases. The questionnaire utilized a five-point Likert scale to assess real-time service performance across multiple scenarios. Statistical analysis employed fuzzy comprehensive evaluation to handle uncertainty in subjective assessments, structural equation modeling to validate construct relationships, and latent class analysis to identify distinct user interaction patterns. The framework demonstrated high reliability with Cronbach's alpha reaching 0.845 and strong construct validity with KMO value of 0.873. [Results/Conclusions] Content organization emerged as the most critical dimension with a combined weight of 0.302 2, while semantic network depth, cognitive guidance efficiency, and cross-media consistency ranked as top secondary indicators with weights of 0.090 3, 0.086 1, and 0.084 7 respectively. Performance evaluation revealed content organization scoring 74.873 points versus user interaction at 68.040 points, highlighting the gap between technical capabilities and user experience. Significant differences existed across library levels, with municipal libraries outperforming county libraries by over one point in technical architecture and semantic network depth. Four distinct user patterns emerged: technology-oriented, content-immersive, efficiency-focused, and assistance-dependent. Each requires a tailored service approach. The study proposes the following optimization strategies: multimodal interaction frameworks, adaptive user profiling, hierarchical collaboration mechanisms, and knowledge graph-based content reorganization.
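The hybrid weighting described above, entropy-based objective weights combined multiplicatively with AHP-derived subjective weights, can be sketched as follows. The function names and the toy data are illustrative assumptions; the AHP pairwise-comparison and geometric-mean aggregation step is taken as already having produced the subjective weight vector:

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Objective weights via the entropy method: indicators (columns)
    whose scores vary more across evaluated libraries (rows) carry
    higher discriminatory power and therefore higher weight."""
    P = X / X.sum(axis=0)              # column-wise score proportions
    P = np.where(P == 0, 1e-12, P)     # guard against log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy per indicator
    d = 1.0 - e                        # degree of divergence
    return d / d.sum()

def combined_weights(ahp_w: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Multiplicative synthesis of subjective (AHP) and objective
    (entropy) weights, renormalized to sum to 1."""
    w = ahp_w * entropy_weights(X)
    return w / w.sum()
```

Note how an indicator with identical scores across all libraries receives zero entropy weight, so the multiplicative synthesis suppresses it regardless of its subjective importance; this is one reason hybrid schemes are preferred over AHP alone.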

  • GAO Dan, CUI Bin
    Journal of library and information science in agriculture. 2025, 37(7): 61-72. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0285

    [Purpose/Significance] As digital technology continues to reshape the preservation and utilization of cultural heritage, the study of the value co-creation of cultural heritage data resources has gained increasing importance. The growing significance of cultural heritage data, coupled with advancements in digital tools such as big data, artificial intelligence, and virtual reality, requires a deeper understanding of the collaborative processes that create value. This research focuses on the value co-creation mechanism of cultural heritage data resources, aiming to offer new perspectives on how diverse stakeholders, including cultural heritage institutions, digital technology providers, and the public, interact dynamically across different stages of data resource management. By proposing a three-dimensional analysis framework based on "stages-subjects-scenarios," this study not only enhances the understanding of the co-creation process, but also contributes to the academic field by exploring the role of different stakeholders in different contexts. The innovation lies in the application of this framework to analyze the specific mechanisms of value co-creation, highlighting the different involvement levels of stakeholders in various stages of data management and usage. The study provides practical implications for improving the management and utilization of cultural heritage data resources, particularly in the context of fostering interdisciplinary collaboration and community engagement. [Method/Process] The study takes an integrated approach, combining case analysis, stakeholder theory, and qualitative research methods, with a particular focus on expert interviews and case study reviews. Through a systematic review of both domestic and international examples, the research explores how different phases of data management - such as data collection, integration, sharing, and application - unfold in practice. 
The case studies were selected using a multi-source approach, which includes authoritative recommendations, literature reviews, and online surveys to ensure a diverse set of representative projects. We analyzed each case to identify the key stages and stakeholders, and how they interact within specific application scenarios. The theoretical foundation is grounded in stakeholder theory and value co-creation frameworks, while empirical evidence was drawn from ongoing projects in the digital humanities and cultural heritage fields. Using this combination of theoretical and empirical sources, the research developed a thorough understanding of how value co-creation mechanisms evolve and manifest in the context of cultural heritage data management. [Results/Conclusions] The research reveals that the value co-creation of cultural heritage data resources involves multiple stakeholders, each contributing differently at various stages of the process. The identified stages include data collection, integration, sharing, application, and dissemination, each with distinct stakeholder involvement. Key stakeholders include cultural heritage institutions, digital technology providers, content creators, government bodies, and the public, each playing a critical role at different phases. For instance, cultural heritage institutions are central during the data collection and preservation stages, while content creators and developers take a more prominent role during the application and innovation stages. The research also identifies that stakeholder participation varies across different application scenarios, such as digital exhibitions, educational projects, and creative industries. The study concludes that achieving effective value co-creation requires a flexible, collaborative approach, tailored to the specific needs of each stage and scenario. 
Recommendations for future practice include improving collaboration between stakeholders, encouraging public participation, and establishing clearer frameworks for data governance and intellectual property rights.

  • CHENG Fan, GU Liping
    Journal of library and information science in agriculture. 2025, 37(8): 78-91. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0315

    [Purpose/Significance] This paper focuses on the development process and service mechanisms of Japan's Research Data Cloud (RDC) system, a core national infrastructure coordinated by the Research Center for Open Science and Data Platform (RCOS). Against the backdrop of growing global attention to open science, RDC presents a practical model for integrating data management, open sharing, publication, search, and preservation throughout the research lifecycle. The paper highlights the unique collaborative model of RDC, which is characterized by a small team driving large networks. Compared to prior literature that often emphasizes technical architectures or isolated institutional efforts, the paper situates RDC within Japan's broader open science strategy, offering both theoretical and practical insights. It explores how RDC contributes to advancing the FAIR data principles, supporting cross-sector innovation, and strengthening national science and technology governance. The analysis also offers strategic lessons for China in building a sustainable and service-oriented research data system. [Method/Process] Using a qualitative case study approach, the paper draws on a combination of primary and secondary sources, including official reports, project documentation, academic literature, and publicly available platform data related to the RDC initiative. It systematically analyzes the organizational structure and collaborative mechanisms of RDC, focusing on the institutional roles, platform components (GakuNin RDM, WEKO3, CiNii Research), and key technological innovations such as data governance, data provenance, secure computing, and trusted storage. 
In particular, it analyzes how RCOS functions as a neutral coordinator that bridges stakeholders across ministries, universities, and research organizations, and how it plays a role in translating policy mandates into technical services, integrating institutional workflows, and fostering community participation in the open science ecosystem. [Results/Conclusions] Despite constrained resources, RDC has developed a comprehensive research data ecosystem that serves researchers, data managers, librarians, industry, and the public. Japan's experience demonstrates that emphasizing interoperability, governance coordination, and capacity building, especially through small-scale research teams and nationwide collaborative networks, can effectively support the development of robust research infrastructure. The paper concludes by proposing several recommendations for China: the creation of independent coordination agencies to avoid fragmented development, the establishment of standardized service frameworks to enhance interoperability, and the implementation of tiered training programs to improve data literacy and management capacity across disciplines. Future research should further explore comparative institutional models, examine the long-term sustainability of open science ecosystems under different governance conditions, and investigate the cultural, legal, and technical dimensions that shape localized approaches to research data governance.

  • CHANG Hao, XU Taotao, LI Feng
    Journal of library and information science in agriculture. 2025, 37(8): 61-77. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0365

    [Purpose/Significance] In cross-domain natural language processing (NLP) tasks, deep learning models often exhibit performance variations due to texts with distinct domain characteristics, leading to a decline in model generalization capabilities. Text complexity stands out as one of the most explanatory factors influencing model generalization. [Method/Process] This paper presents two innovative contributions. First, a multi-dimensional text complexity calculation framework grounded in systemic functional linguistics theory was constructed. This framework employs a hierarchical quantification approach: at the lexical level, it dynamically identified four types of non-standard expressions - abbreviations, emoticons, internet buzzwords, and alphanumeric mixed words - and calculated a normative score using a non-linear formula. At the sentence level, an innovative inverse fusion enhancement method (IFEM) was proposed, integrating punctuation anomaly density (weight 0.1), colloquial word ratio (weight 0.4), semantic ambiguity (weight 0.2), and sentence length features (weight 0.3), and generating a structural score through modeling of feature synergy and suppression effects along with an adaptive weighting mechanism. Finally, at the corpus level, a weighted fusion produced the global corpus complexity assessment. Experimental results demonstrated that this framework successfully quantifies intrinsic differences between domain texts. For instance, the measured complexity of the waimai_10k dataset reached 0.703, significantly higher than the 0.552 of the ChnSentiCorp_htl_all dataset, and it accurately captured complexity changes even after internal text reduction and substitution operations. Second, a knowledge base-enhanced dynamic adaptive CNN-BiLSTM model was designed. 
This model implemented the following innovative mechanisms: 1) The knowledge base adopts a dual mapping architecture of text-label and vector-label, supporting historical experience knowledge loading and real-time error recording; 2) Feature weights were adjusted based on the knowledge base content, such as strengthening positive semantic representations or weakening negative expressions. The model architecture integrated multi-scale CNN convolutional kernels for local feature extraction, bidirectional long short-term memory networks for capturing long-distance dependencies, and an attention mechanism to focus on key information. To validate the effectiveness of the proposed methods, experiments were conducted on four Chinese datasets. [Results/Conclusions] The results indicate that the complexity calculation framework exhibits strong robustness, with complexity fluctuations below 3.3% after a 20% sample reduction, and a maximum complexity increase of 13.8% upon short text data injection. Moreover, the framework effectively quantifies and differentiates text complexities, as evidenced by the 0.703 complexity of the waimai_10k dataset compared to the 0.552 of the ChnSentiCorp_htl_all dataset. Additionally, the proposed model demonstrated optimal performance across both the most standardized ChnSentiCorp_htl_all dataset and the most challenging waimai_10k dataset (achieving accuracies of 0.923 8 and 0.943 4, respectively), significantly outperforming Transformer and various large language models such as deepseek-v3 and qwen-plus.
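The sentence-level fusion with the stated weights can be sketched as a simple baseline. Note that this linear form is a simplification: the paper's IFEM additionally models feature synergy and suppression effects and uses an adaptive weighting mechanism, all of which this sketch omits, and the feature and function names are illustrative:

```python
# Simplified baseline for the sentence-level structural score: a plain
# weighted fusion of four normalized [0, 1] features using the weights
# reported in the abstract. The paper's full IFEM also models feature
# synergy/suppression and adapts the weights; that logic is omitted here.

WEIGHTS = {
    "punct_anomaly": 0.1,  # punctuation anomaly density
    "colloquial":    0.4,  # colloquial word ratio
    "ambiguity":     0.2,  # semantic ambiguity
    "length":        0.3,  # sentence length feature
}

def structural_score(features: dict) -> float:
    """Linear weighted fusion of one sentence's normalized features."""
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)

def corpus_complexity(sentence_scores: list) -> float:
    """Corpus-level score as the mean of sentence scores; one simple
    instance of the corpus-level 'weighted fusion' described above."""
    return sum(sentence_scores) / len(sentence_scores)
```

Under this baseline, a corpus dominated by colloquial, loosely punctuated sentences (e.g. food-delivery reviews) scores higher than one of formal hotel reviews, which is the direction of the reported waimai_10k (0.703) versus ChnSentiCorp_htl_all (0.552) gap.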

  • YE Zhifei, WU Zhenxin, LI Hanyu, WANG Ying
    Journal of library and information science in agriculture. 2025, 37(10): 67-77. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0364

    [Purpose/Significance] The rapid advancement of digital infrastructure has precipitated a fundamental transformation in scholarly communication, characterized by an increasing reliance on online platforms. Preprint exchange, as a cornerstone of open science, offers researchers opportunities for immediate dissemination and collaborative engagement. However, the absence of rigorous peer review raises persistent concerns regarding research ethics, data integrity, and the reliability of scholarly outputs, which can undermine public confidence in preprint platforms. Addressing these challenges is essential not only for maintaining the integrity of academic discourse but also for fostering a transparent and trustworthy open science ecosystem. This research contributes to the existing scholarship by systematically examining the trust framework of preprint platforms, positioning itself at the intersection of library and information science and scholarly communication studies. In contrast to previous investigations that have focused predominantly on dissemination efficiency or platform functionality, this study emphasizes the structural dimensions of trustworthiness. It presents an innovative analytical framework that strengthens the theoretical foundations of academic information trust and provides practical strategies for enhancing the governance and legitimacy of preprint platforms. [Method/Process] To ensure both theoretical rigor and empirical depth, first, a comprehensive literature review was conducted to identify potential trust-related vulnerabilities in preprint platforms and to systematically delineate their credibility challenges. This review identified five critical factors influencing the credibility of preprint platforms: academic conflicts of interest, platform reliability, heterogeneous manuscript quality, information overload, and insufficient academic recognition. 
Drawing upon the DeLone & McLean (D&M) Information Systems Success Model and aligning with the ISO 16363 standard for trustworthy digital repositories, the study analyzed the structural components of trustworthiness through the dimensions of system quality, information quality, and service quality. Subsequently, in-depth case studies of prominent platforms, including arXiv and ChinaXiv, were undertaken to examine their governance architectures, operational methodologies, and practical implementations. This process culminated in evidence-based recommendations for enhancing platform trustworthiness. This integrated methodological framework not only synthesizes theoretical insights with empirical evidence but also ensures the scientific rigor, reliability, and practical applicability of the proposed trust model. [Results/Conclusions] Based on these findings, a three-dimensional trust framework was developed, encompassing system trustworthiness, information trustworthiness, and service trustworthiness. This framework transcends traditional quality control paradigms and offers novel perspectives for the standardized development of preprint platforms. The research further articulates pathways for establishing trustworthiness across three levels: 1) system trustworthiness, adhering to FAIR principles and implementing long-term preservation strategies to provide a stable institutional foundation; 2) information trustworthiness, establishing a comprehensive quality governance continuum that incorporates "pre-screening, dynamic identification, and post-peer review" mechanisms; and 3) service trustworthiness, delivering professional preprint services through collaborative governance models and journal coordination frameworks. While this framework provides a comprehensive analytical perspective, certain limitations should be acknowledged. This study's primary reliance on qualitative methods necessitates broader empirical validation. 
Furthermore, its focus was on platform functionalities rather than user perceptions. Consequently, future research can adopt a mixed-methods approach, incorporate user perception theories, and establish quantitative metrics for evaluating platform trustworthiness.

  • MA Haiqun, MAN Zhenliang
    Journal of library and information science in agriculture. 2025, 37(6): 4-19. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0271

    [Purpose/Significance] In the context of the booming development of the digital economy, public data, as a fundamental strategic resource of the country, plays an important role in promoting high-quality economic development, enhancing government governance capabilities, and stimulating social innovation vitality through its development and utilization. The "Opinions on Accelerating the Development and Utilization of Public Data Resources" (hereinafter referred to as the Opinions) is the first top-level design document for the development and utilization of public data resources at the central level, and its policy effectiveness directly affects the success or failure of the market-oriented allocation reform of data elements. Therefore, a comprehensive and systematic evaluation of the Opinions not only helps identify the strengths and weaknesses of policy design, but also provides a scientific basis for the continuous optimization of policies, thereby ensuring the efficient development and utilization of public data resources and providing strong support for high-quality economic and social development. [Method/Process] This study introduces the innovative S-CAD evaluation framework, which analyzes policy texts in depth through three dimensions: consistency, sufficiency, and dependency. 1) Consistency analysis focuses on the logical coherence between policy positions, goals, means, and expected outcomes. 2) Sufficiency analysis evaluates the necessity and adequacy of policy measures for achieving goals. 3) Dependency analysis identifies key chains and stakeholders' interests and demands in policy implementation to evaluate the feasibility and acceptability of the policy. In terms of specific application, this study first clarifies the dominant viewpoint of the policy (policy makers) and the related viewpoints of policy implementers, participants, and influencers. 
Subsequently, four typical elements of stance, objectives, means, and expected outcomes were identified from the policy text, and an analysis chart of the content of the Opinions was constructed. Scholars from the field of information resource management were invited to participate, ensuring the scientific rigor and accuracy of the evaluation. Consistency analysis shows that the policy stance, objectives, means, and expected outcomes of the Opinions are logically closely related. The objectives revolve around accelerating the development and utilization of public data resources, and the means and objectives support each other. The expected outcomes are highly consistent with the means, reflecting the systematic and rational design of the policy. The analysis of necessity and sufficiency shows that policy measures play an important role in achieving goals, such as deepening the reform of data element allocation and regulating the authorized operation of public data, all of which provide strong guarantees for achieving policy goals. A dependency analysis reveals potential challenges in policy implementation. These challenges include difficulties in coordinating departmental interests, unclear details of data authorization operations, insufficient data quality and availability, and public concerns about privacy protection. In response to these issues, this study proposes suggestions such as strengthening departmental collaboration, clarifying data authorization operation processes, improving data quality and availability, and strengthening publicity on data security management and privacy protection. [Results/Conclusions] The issuance of the Opinions provides an important policy framework and guidance for the development and utilization of public data resources, but there is still room for improvement in areas such as departmental collaboration and privacy protection. 
To enhance public trust in and support for the policy, future policy measures should be further refined, data authorization operation mechanisms should be optimized, data quality and utilization efficiency should be improved, and data security management and privacy protection should be strengthened. By continuously monitoring the development trends of the data industry and adjusting policies in a timely manner, the efficient and orderly development and utilization of public data resources can be ensured. This approach injects strong impetus into the high-quality development of the economy and society.

  • LIU Hao, JIN Xiaohe
    Journal of library and information science in agriculture. 2025, 37(7): 73-90. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0372

    [Purpose/Significance] Studying whether the development of the digital economy can boost rural household consumption bears directly on expanding rural consumption potential in the digital age. This is significant for leading the country's overall economic development and overcoming the obstacles that restrict the growth of domestic demand. Research on rural household consumption has increasingly been extended to the digital economy. The contribution of this paper lies in the following aspects. First, few scholars have refined the types of consumption when studying the relationship between the two. Starting from the heterogeneous consumption structure, this paper explores how the "broadband rural" policy differentially affects the consumption structure of rural households, creating diversified consumption needs and experiences, promoting new forms of consumption, and further tapping the consumption potential of rural households. Second, previous scholars primarily focused on macro city-level data, while this paper uses micro-level data from the China Family Panel Studies (CFPS) from 2010 to 2022 to extend the identification period of the policy's dynamic effects. At the level of individual farmers, this paper examines the differential impact of the digital economy on their consumption behavior. Third, it introduces family endowment into the mechanism through which the digital economy influences farmers' household consumption, discusses how endowment differences moderate the policy's influence, and supplements the research perspective of previous scholars. 
[Method/Process] Based on data from the China Family Panel Studies (CFPS) from 2010 to 2022, this paper constructs seven periods of unbalanced panel data, takes the "broadband rural" policy as a quasi-natural experiment, adopts difference-in-differences, triple-difference, and PSM-DID methods, and combines the Keynesian absolute income hypothesis, information asymmetry theory, and precautionary savings theory to evaluate the impact of the digital economy on farmers' household consumption. [Results/Conclusions] The results show that the digital economy has significantly promoted the consumption of rural households, although its effect on enjoyment-oriented consumption is not significant. Mechanism and heterogeneity analyses show that family endowment has a significant moderating effect and that the impact of the digital economy differs across groups. Based on this, the paper puts forward countermeasures and suggestions to channel the dividends of digital economy development downward, focus support on heterogeneous groups, and reasonably advocate new forms of consumption. The impact of the digital economy on rural household consumption still needs further exploration, which is of great significance for realizing the domestic and international dual circulation. However, this is difficult to achieve due to data limitations and the need for long-term tracking. Therefore, future work will extend the effect analysis of the "broadband rural" policy to its long-term impact on rural household consumption.
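
    As a minimal illustration of the difference-in-differences logic underlying the quasi-natural experiment above, the canonical 2x2 case compares the before/after change of treated households with that of controls; the figures below are hypothetical, not CFPS values:

    ```python
    # Canonical 2x2 difference-in-differences: hypothetical mean consumption
    # for treated (covered by "broadband rural") and control rural households,
    # before and after the policy rollout. Numbers are illustrative only.
    treated_pre, treated_post = 100.0, 130.0
    control_pre, control_post = 95.0, 110.0

    # Treated change minus control change nets out the common time trend.
    did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
    ```

    The paper's actual estimators (two-way panel DID, triple difference, PSM-DID) generalize this comparison with fixed effects, a third differencing dimension, and matched samples, but the identifying contrast is the same.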

  • CHEN Yuanyuan, FU Bin, GAO Yuan, QIAO Junwei
    Journal of library and information science in agriculture. 2025, 37(6): 55-69. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0275

    [Purpose/Significance] With the rapid advancement of global science and technology, emerging technologies are constantly evolving, placing higher demands on national strategic planning and resource allocation. Artificial intelligence (AI), as a core driver of the current technological revolution, requires close attention to its internal technical topic evolution to anticipate disruptive changes and guide the direction of innovation. Although existing research primarily focuses on identifying technical topics through bibliometric or patent analysis, there is still insufficient quantitative forecasting of their future development. To address this gap, this study proposes an integrated analytical framework that combines BERTopic-based topic modeling with an IWOA-optimized BiLSTM neural network, achieving a unified approach to both topic identification and trend forecasting. Unlike traditional LDA models or expert-based subjective judgment, this method demonstrates significant advancements in semantic representation, model optimization, and prediction accuracy. It expands the theoretical boundaries of emerging technology forecasting and offers valuable quantitative support for science and technology policy-making. [Method/Process] This study utilized 22,243 AI-related patent records collected from 2015 to 2024. BERTopic was applied to extract representative technology topics from patent abstracts. A multi-dimensional evaluation system was constructed using three indicators: novelty, impact, and growth rate, capturing different aspects of emerging technologies. The CRITIC method was employed to objectively assign weights to each dimension, enhancing the robustness and balance of the composite index. BERTopic, which integrates BERT-based semantic embeddings with HDBSCAN density-based clustering, improves both the coherence and granularity of topic extraction. 
For trend prediction, an Improved Whale Optimization Algorithm (IWOA) was introduced to fine-tune the BiLSTM's learning rate, epoch count, and hidden layer size. IWOA enhances global optimization through Gaussian chaos initialization, a Lévy flight strategy, nonlinear control factors, and elite reverse learning. [Results/Conclusions] Experimental results show that BERTopic achieves superior topic coherence compared with baseline models and successfully identifies five emerging technical areas: Intelligent Models and Algorithms, Information Processing, Deep Neural Networks, Smart Robotics, and Numerical Control Systems. The IWOA-BiLSTM model outperforms conventional LSTM and BiLSTM models on error metrics (MAPE, RMSE, and MAE), confirming its predictive advantage. Forecast results indicate that these emerging topics will experience sustained growth over the next five years, reflecting strong application potential and industrial value. This study confirms the feasibility and effectiveness of the integrated "identification–prediction" framework, providing a data-driven tool for strategic decision-making in science and technology development. Limitations include dependence on data quality and a current focus on the field of AI. Future research should expand the framework to other strategic areas, such as renewable energy, biomedicine, and intelligent manufacturing, to further validate its generalizability.
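
    The CRITIC weighting step mentioned above can be sketched as follows: each indicator's weight grows with its contrast intensity (standard deviation after normalization) and its conflict with the other indicators (sum of 1 - r over pairwise correlations). The score matrix below is hypothetical, standing in for the novelty/impact/growth-rate scores of five topics:

    ```python
    import numpy as np

    def critic_weights(X):
        """CRITIC objective weighting over a (samples x indicators) score matrix."""
        # Min-max normalize each indicator column
        Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
        std = Xn.std(axis=0, ddof=1)           # contrast intensity per indicator
        corr = np.corrcoef(Xn, rowvar=False)   # indicator correlation matrix
        conflict = (1 - corr).sum(axis=0)      # conflict with the other indicators
        info = std * conflict                  # information content C_j
        return info / info.sum()               # normalized weights

    # Hypothetical novelty / impact / growth-rate scores for five topics
    X = np.array([[0.8, 0.2, 0.5],
                  [0.6, 0.9, 0.4],
                  [0.3, 0.4, 0.9],
                  [0.7, 0.5, 0.6],
                  [0.2, 0.8, 0.3]])
    w = critic_weights(X)  # one weight per indicator, summing to 1
    ```

    Compared with expert-assigned weights, this keeps the composite "emergingness" index driven by the data: an indicator that varies little, or that duplicates another indicator, contributes less.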

  • AN Bo
    Journal of library and information science in agriculture. 2025, 37(12): 81-94. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0422

    [Purpose/Significance] Although traditional Chinese medicine (TCM) classics contain valuable knowledge, they remain difficult to process automatically due to their complex page layouts, the coexistence of traditional, simplified, and variant character forms, alias-rich terminology, and strong cross-paragraph semantic dependencies. Existing pipelines often split the processes of optical character recognition (OCR), normalization, entity recognition, relation extraction, and entity alignment, which leads to error propagation. Additionally, many studies focus on modern clinical texts rather than historical sources. This paper addresses these gaps by presenting an end-to-end pipeline that transforms ancient page images into a structured knowledge graph. The central contribution is CoTCMKE, a chain-of-thought (CoT), ontology-constrained joint model that performs named entity recognition (NER), relation extraction (RE), and entity alignment (EA) simultaneously. By making intermediate reasoning explicit and binding predictions to a TCM ontology, the framework improves batch digitization efficiency, extraction accuracy, and interpretability for digital humanities and library & information science (LIS) applications. [Method/Process] We built a unified pipeline with three steps. 1) Text recognition: a multimodal large language model (MLLM) recognizes text directly from complex pages with mixed vertical/horizontal layouts and performs context-aware traditional-to-simplified conversion. 2) Ontology construction: following the principles of semantic completeness, multimodal friendliness, evolvability, and interoperability, experts curate an ontology of core TCM concepts (e.g., diseases, symptoms, formulae, herbs) with aliases and constraints to guide decoding and ensure consistency. 
3) Knowledge extraction: CoTCMKE integrates CoT with ontology constraints for multi-task extraction: entity localization and normalization, ontology-consistent relation generation, and cross-passage/cross-volume entity alignment. Constraint-aware decoding applies immediate checks and backtracks when a generated entity or relation violates ontology rules or alias mappings. For data, we used Shang Han Lun. Qwen2.5-VL-32B assists OCR, conversion, and initial auto-labeling; two TCM-trained annotators independently review and reconcile the results. The final sets contain 2,340 NER items, 1,880 RE items, and 450 EA pairs, evaluated with 10-fold cross-validation. The MLLM was adapted via LoRA with early stopping. The comparisons include traditional deep models, a unified IE framework, prompt-only inference, and a LoRA-SFT baseline. [Results/Conclusions] On Shang Han Lun, CoTCMKE outperformed LoRA-SFT by +3.1 F1 for NER, +1.6 for RE, and +1.3 for EA. In cross-book transfer to Jin Kui Yao Lue, the model maintained stable performance without retraining, indicating robustness and scalability. Ablation results showed that CoT reduced boundary and ambiguity errors, while ontology constraints curbed illegal triples and alias fragmentation; combining both yielded the best overall results. The analysis yielded the following observations: 1) explicit medical relation templates act as semantic guardrails; 2) proactive alias consolidation before decoding reduces entity scattering and improves alignment; 3) explicit type-path guidance helps disambiguate fine-grained categories (e.g., pulse findings vs. general symptoms). The framework supports the automatic construction of "formula-symptom-herb" triples, as well as alias and variant normalization. It also supports evidence-linked semantic search and navigation, benefiting LIS workflows, education, and research. 
Current limitations include the scope of the curated ontology and its focus on two classics. Future work will extend to additional TCM classics and broader historical corpora, support continual incremental learning, and deliver knowledge services based on the constructed graphs.
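
    The constraint-aware decoding idea can be sketched in miniature: consolidate aliases first, then accept a candidate triple only if its relation is legal for the entity-type pair, rejecting (and triggering backtracking in) anything else. The ontology fragment and alias table below are illustrative stand-ins, not the paper's curated ontology:

    ```python
    # Illustrative alias table and type-pair schema; not the paper's ontology.
    ALIASES = {"Gui Zhi Tang": "桂枝汤"}          # alias -> canonical entity
    SCHEMA = {
        ("formula", "herb"): {"contains"},        # legal relations per type pair
        ("formula", "symptom"): {"treats"},
    }

    def validate_triple(head, head_type, relation, tail, tail_type):
        """Return a normalized triple, or None if it violates the schema."""
        head = ALIASES.get(head, head)            # alias consolidation before checking
        tail = ALIASES.get(tail, tail)
        legal = SCHEMA.get((head_type, tail_type), set())
        if relation not in legal:                 # ontology violation -> reject/backtrack
            return None
        return (head, relation, tail)
    ```

    In the full model this check runs during generation, so an illegal entity or relation is caught immediately rather than surviving into the knowledge graph; the sketch only shows the accept/reject decision.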

  • XIAO Yufan, CHEN Rui, HUANG Ying
    Journal of library and information science in agriculture. 2025, 37(6): 87-101. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0349

    [Purpose/Significance] As the knowledge economy grows and global technological competition intensifies, universities have become essential drivers of innovation within national innovation systems. Not only do high‑level research universities generate original scientific discoveries, they also serve as catalysts for technological innovation and drivers of industrial upgrading. Their roles span from conceiving breakthrough ideas to shepherding technologies through product development and into marketable applications. Nevertheless, the multifaceted nature of these contributions remains insufficiently characterized, making it difficult for policymakers and university leaders to fine‑tune strategies that maximize impact. A comprehensive understanding of how universities contribute at each stage of the innovation continuum is therefore vital for optimizing their functions, informing targeted policy interventions, and reinforcing the synergetic linkages between academia, industry, and government in both national and global contexts. [Method/Process] To clarify universities' distinct contributions at each stage of innovation, this study presents an innovation value chain model and corresponding analytical framework that systematically maps their core functions - serving as knowledge innovators during basic research, technology developers in applied research, transfer agents in product development, and academic entrepreneurs in commercialization. Based on this model, we constructed an analytical framework comprising qualitative and quantitative indicators tailored to capture university activities at each stage. During the basic research phase, metrics such as publication volume, citation impact, and basic science funding shed light on the roles of universities as innovators of knowledge. During applied research, patent filings, joint industry‑university project counts, and collaborative R&D expenditure serve as proxies for technology development capacity. 
The product development phase assessment centers on technology licensing volume, spin‑off formation rate, and prototype demonstration projects to gauge technology transfer effectiveness. Finally, commercialization was examined via start‑up success rates, venture funding attracted, and market penetration of university‑originated products. Empirical analysis was conducted on representative samples drawn from China's C9 League universities and the U.S. Ivy League universities, leveraging bibliometric databases, patent offices, and institutional reports to ensure data robustness. [Results/Conclusions] The findings demonstrate that universities in China and the U.S. play distinct yet complementary roles at different innovation stages. Chinese universities exhibit rapidly growing research outputs and increasing basic research capability, signaling a powerful catching‑up momentum in building technological reserves. Their strengths lie primarily in knowledge generation and early‑stage technology development, supported by substantial increases in R&D investment and talent cultivation. In contrast, U.S. universities maintain leadership in original innovation quality and commercialization efficiency, underpinned by high‑impact publications, a mature ecosystem of technology transfer offices, and established venture funding networks. They excel at translating research breakthroughs into market‑ready products and ventures, achieving higher license income per patent and greater market penetration. This comparative analysis underscores the necessity of diverse, stage‑specific university roles and highlights opportunities for cross‑border learning. In the future, Chinese higher education institutions (HEIs) can enhance their commercialization performance by adopting proven U.S. strategies, such as streamlined intellectual property policies, incentive programs for faculty entrepreneurship, and extensive industry partnerships, while adapting these practices to local contexts. 
By doing so, they can improve the quality and market depth of their knowledge and technology outputs, and optimize the university technology transfer system, thereby providing robust support for achieving sustainable, high‑quality economic development.

  • LIU Yihan, CHU Yuxia, ZHAI Yujia
    Journal of library and information science in agriculture. 2025, 37(12): 20-35. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0556

    [Purpose/Significance] Short video platforms have become the core arena for the digital presentation and dissemination of intangible cultural heritage (ICH). However, the "Matthew Effect" in the digital attention economy often causes high-quality ICH content to be submerged. Existing research predominantly suffers from "modal segmentation," focusing on single modalities such as text and visuals in isolation, which fails to explain how these elements synergistically drive user engagement. To address this gap, this study constructs a communication effect evaluation model based on multimodal machine learning. The innovation of this research lies in integrating computational communication methods with traditional persuasion theories, moving beyond simple content analysis to a quantifiable predictive framework. By identifying key influencing factors through data fusion, this study provides a scientific basis for optimizing the digital production strategies of ICH content, offering significant value for enhancing the visibility of traditional culture and overcoming the barriers of digital dissemination. [Method/Process] This study integrates the elaboration likelihood model (ELM) and media ritual theory to establish a "cognitive-behavioral-cultural" dual-path analytical framework. Theoretically, the study maps content quality (video/audio/text) to the "Central Route" and source credibility (author attributes) to the "Peripheral Route." Empirically, taking ICH videos on Douyin as the subject, the study collected data from May 2024 to May 2025. After rigorous cleaning, a dataset of 2,869 valid samples was established. The study employs a multimodal feature engineering approach: visual and textual features are extracted to represent content quality; audio features (including FBank and MFCC) are processed using the OpenSMILE toolkit to capture prosodic and spectral characteristics; and author data are collected to quantify social influence. 
The Random Forest algorithm is utilized to fuse these heterogeneous data sources, analyze feature importance, and predict communication effectiveness. [Results/Conclusions] The empirical results demonstrate that the multimodal fusion model significantly outperforms single-modality approaches in predicting communication effects, confirming that ICH dissemination is the result of complex symbol interaction. Feature importance analysis reveals a distinct hierarchy: author attributes make the highest contribution, indicating that the "Peripheral Route", driven by the creator's social capital, is the decisive factor in determining communication heat; its persuasive power far surpasses that of the content itself. Among the content modalities, text and video follow in importance, serving as critical tools for user retention, while the audio modality holds supplementary semantic value by setting the emotional atmosphere. A limitation is that the study does not account for dynamic temporal changes or external trending events. Effective ICH dissemination requires a synergistic strategy: prioritizing the accumulation of the author's social influence as the core driver, while simultaneously optimizing visual and textual quality. Future research should incorporate time-series analysis to capture dynamic communication trends.
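
    The fusion-and-ranking step can be sketched with scikit-learn's RandomForestRegressor on a synthetic stand-in for the fused feature matrix. The modality blocks and the engagement target below are simulated (with the author feature deliberately made dominant, mirroring the hierarchy the study reports); this is not the study's dataset or feature set:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 200
    # Simulated per-modality feature blocks (two features each)
    author = rng.normal(size=(n, 2))   # e.g. follower count, verification status
    text   = rng.normal(size=(n, 2))
    video  = rng.normal(size=(n, 2))
    audio  = rng.normal(size=(n, 2))
    X = np.hstack([author, text, video, audio])  # multimodal fusion by concatenation

    # Synthetic engagement score dominated by the first author feature
    y = 3.0 * author[:, 0] + 1.0 * text[:, 0] + 0.5 * video[:, 0] \
        + rng.normal(scale=0.1, size=n)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    importances = model.feature_importances_   # per-feature contribution ranking
    ```

    Summing `importances` over each modality's columns gives the modality-level ranking discussed above; on this synthetic target the author block dominates by construction.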

  • JIANG Enbo, QIN Yu
    Journal of library and information science in agriculture. 2025, 37(10): 4-21. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0338

    [Purpose/Significance] As artificial intelligence (AI) systems are being widely deployed across diverse domains such as education, healthcare, and public governance, the absence of standardized metadata specifications has led to fragmented descriptions, inconsistent documentation, and difficulties in model evaluation and reuse. This study aims to address the pressing issues of opacity, lack of interpretability, and poor traceability in current AI models, which have increasingly become obstacles to the development of transparent and responsible AI. To overcome these challenges, this study proposes the establishment of a unified metadata specification for AI models to enhance their discoverability, transparency, interoperability, and reusability, thereby advancing the development of trustworthy AI and facilitating effective model governance. [Method/Process] Grounded in metadata quality assessment theory and lifecycle theory, the study adopted a combination of research methods, including literature review, comparative analysis of existing specifications, and questionnaire surveys. We first conducted a systematic examination of domestic and international practices related to AI model metadata specifications to identify representative standards, frameworks, and implementation approaches. Through comparative analysis, the study investigated the structure, element organization, and semantic relationships of different specifications, highlighting their similarities, differences, and areas for improvement. Meanwhile, a targeted questionnaire survey was administered to researchers, developers, and practitioners to explore user awareness, perceptions, practical experiences, and specific needs regarding metadata specification and interoperability. 
Based on these findings, the study ultimately proposed a lifecycle-oriented framework for metadata specification construction, ensuring alignment with the key stages of AI model development, deployment, evaluation, and governance. [Results/Conclusions] The findings reveal that, although users generally recognize the importance of metadata specifications for AI models, they are largely unaware of the existing specifications. Current AI model metadata specifications have significant shortcomings in element naming, structural organization, and descriptive granularity, which hinder the effective sharing and reuse of model information. In response, the study proposed a comprehensive metadata framework encompassing key entities such as models, datasets, algorithms, technical features, performance evaluations, risks and ethics, legal information, and related resources, as well as the semantic relationships among these entities. The research concluded that establishing a unified metadata specification for AI models not only contributes to effective information management and cross-platform interoperability, but also serves as a critical infrastructure linking technology, ethics, and governance. As the metadata specification system matures and gains wider industry adoption, AI models will become increasingly controllable and trustworthy, promoting a more regulated, collaborative, sustainable, and integrated AI ecosystem.
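
    As an illustration only, a slice of such a metadata record covering several of the entities named above might be modeled as follows. The element names, field types, and example values are assumptions made for this sketch, not the elements actually proposed by the study:

    ```python
    from dataclasses import dataclass, field

    # Hypothetical slice of a lifecycle-oriented AI model metadata record.
    @dataclass
    class ModelMetadata:
        name: str
        version: str
        algorithm: str                                        # algorithm entity
        training_datasets: list = field(default_factory=list) # dataset entity links
        performance: dict = field(default_factory=dict)       # metric name -> value
        risks_and_ethics: list = field(default_factory=list)  # documented risk notes
        license: str = "unspecified"                          # legal information

    record = ModelMetadata(
        name="crop-disease-classifier",
        version="1.2.0",
        algorithm="ResNet-50",
        training_datasets=["PlantVillage"],
        performance={"accuracy": 0.91},
        risks_and_ethics=["possible domain shift on field images"],
        license="Apache-2.0",
    )
    ```

    Making such elements explicit and machine-readable is what enables the cross-platform discovery, evaluation, and reuse the study argues for; a real specification would additionally fix element names, cardinalities, and semantic relationships between entities.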

  • LIU Ting, LIU Shuhan, LIU Zhenyan, ZENG Dequan, HU Yuan
    Journal of library and information science in agriculture. 2025, 37(9): 32-48. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0408

    [Purpose/Significance] In the digital age, data elements have become a key factor of production, while insufficient bargaining power in the supply chain poses significant operational risks to enterprises. How to leverage the opportunities of the digital economy, maximize the role of data elements, and avoid the operational risks caused by insufficient supply chain bargaining power has become a key issue that enterprises urgently need to address. Investigating how data utilization enhances this power is vital for building resilient supply chains and informing governance decisions, and it provides micro-level evidence that helps us understand how data elements can optimize resource allocation and empower organizational decision-making. [Method/Process] This study employs a rigorous empirical approach using panel data from China's A-share listed companies from 2003 to 2022. A two-way fixed effects model serves as the primary estimator to control for unobserved heterogeneity. To credibly address potential endogeneity issues, such as reverse causality and sample selection bias, we implemented a comprehensive identification strategy incorporating instrumental variables, Heckman's two-stage correction model, and a series of robustness checks including alternative variable constructions and sub-sample analyses. Furthermore, we conducted mechanism analysis to elucidate the transmission channels and heterogeneity analysis to examine conditional effects across different types of firms. [Results/Conclusions] The empirical results demonstrate that improving the level of data element utilization effectively strengthens a firm's supply chain bargaining power, reducing its dependence on large suppliers and customers and enhancing its influence in the supply chain. 
This conclusion holds after robustness tests such as replacing the regression model, adding control variables, and adjusting the sample period. Mechanism analysis indicates that the level of data element utilization strengthens supply chain bargaining power primarily through two channels: improving supply chain efficiency and alleviating financing constraints. First, data elements optimize enterprises' inventory management, logistics scheduling, and supply chain collaboration, improving operational efficiency and reducing dependence on key suppliers and customers. Second, data elements improve enterprises' information transparency, reduce external financing costs, and enhance capital liquidity, giving firms greater autonomy and bargaining power in supply chain transactions. A heterogeneity analysis revealed significant differences in the empowering effects of data elements across different types of enterprises: the effect on supply chain bargaining power is more pronounced for non-labor-intensive and non-asset-intensive enterprises, and stronger for non-technology-intensive and non-high-tech industry enterprises. This suggests that companies that rely less on traditional physical resources are better able to use data to gain a competitive advantage. This study establishes a robust theoretical basis for data-driven supply chain management and presents significant policy implications. One limitation is its focus on listed companies; future research could extend this inquiry to small and medium-sized enterprises and global supply chain contexts.
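For a balanced panel, the two-way fixed effects estimator named above reduces to an ordinary regression on within-transformed data. The sketch below is illustrative only: the synthetic data, variable names, and single-regressor setup are assumptions for demonstration, not the study's data or full specification.

```python
import numpy as np

def twoway_within(x):
    """Two-way within transformation for a balanced panel of shape (N, T).

    Subtracting entity means and time means, then adding back the grand
    mean, removes additive entity and time fixed effects exactly when
    the panel is balanced.
    """
    return (x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + x.mean())

def fe_estimate(y, x):
    """Slope from a two-way fixed effects regression of y on a single x."""
    yt, xt = twoway_within(y), twoway_within(x)
    return float((xt * yt).sum() / (xt * xt).sum())

rng = np.random.default_rng(0)
N, T, beta = 200, 20, 0.8
alpha = rng.normal(size=(N, 1))        # entity fixed effects
gamma = rng.normal(size=(1, T))        # time fixed effects
x = rng.normal(size=(N, T)) + alpha    # regressor correlated with alpha
y = beta * x + alpha + gamma + 0.1 * rng.normal(size=(N, T))
print(f"estimated beta: {fe_estimate(y, x):.3f}")
```

Because `x` is correlated with the entity effects `alpha`, a pooled regression would be biased, while the within estimator recovers `beta`; this is the "unobserved heterogeneity" the abstract refers to.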

  • TANG Feng, FANG Xiangming, WANG Yixin
    Journal of library and information science in agriculture. 2025, 37(11): 47-61. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0462

    [Purpose/Significance] The digital special collections of libraries face significant challenges in data circulation and value release, which greatly limits their potential utility. To address these issues, this study proposes establishing a trusted data space specifically designed for the digital special collections of libraries. The main objective is to reduce trust-related costs and promote the full utilization of their multi-dimensional value in areas such as cultural heritage protection, academic research, industrial innovation, and social education. By creating a secure and interoperable environment for data sharing, the plan aims to transform the way digital special collections are managed, accessed, and utilized, thereby enhancing their contribution to broader social goals. [Method/Process] This study centers on the trusted data space to explore the cross-domain circulation and value release mechanism of digital special collections. It aims to build a dedicated, trusted data space for libraries, break down data barriers, and activate multi-dimensional value. The investigation follows a structured approach centered on requirements analysis, framework construction, and strategy formulation. Building on the concept and technical foundations of the trusted data space, and taking into account the unique attributes and sharing requirements of digital special collections, a comprehensive theoretical framework has been developed around three core capability streams: resource interaction, trusted governance, and value co-creation. These streams are supported by a five-layer architecture model: infrastructure, data interaction, data elementization, intelligent services, and value realization. 
To illustrate the practical application of this framework, typical usage scenarios were analyzed to demonstrate how special collections data can be transformed from raw resources into valuable assets, and the characteristics and key tasks of specific stages were examined in detail. In addition, a multi-faceted implementation strategy is proposed to address real-world challenges, including stakeholder reluctance, technological heterogeneity, and conflicts in rights management. These strategies emphasize the development of intelligent resources, the integration of multi-modal and heterogeneous technologies, policy incentive mechanisms, and the establishment of a sound data element market. [Results/Conclusions] The trusted data space proposed in this paper provides a systematic and effective solution for the trusted circulation and efficient utilization of cultural data. It transforms digital special collections into open, reusable assets, thereby significantly enhancing the quality and scope of public cultural services. This development aligns with and supports the national strategic goals of building a "cultural power" and a "Digital China". Looking ahead, future research should prioritize the shift from theoretical conceptualization to practical implementation, including integrating technical solutions with actual service workflows and clarifying the unique role of libraries in the broader data ecosystem. To ensure long-term success and influence, key challenges such as sustainable business models and sound evaluation mechanisms must be addressed.
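The layered model above can be expressed as a small data structure. This is a minimal sketch using only the layer and stream names given in the abstract; the mapping of each stream onto particular layers is a hypothetical assumption, not taken from the paper.

```python
from dataclasses import dataclass

# The five layers named in the abstract, ordered bottom-up.
LAYERS = (
    "infrastructure",
    "data interaction",
    "data elementization",
    "intelligent services",
    "value realization",
)

@dataclass(frozen=True)
class CapabilityStream:
    name: str
    layers: tuple  # layers the stream draws on (hypothetical mapping)

STREAMS = (
    CapabilityStream("resource interaction", ("infrastructure", "data interaction")),
    # assumption: trusted governance cuts across every layer
    CapabilityStream("trusted governance", LAYERS),
    CapabilityStream("value co-creation", ("intelligent services", "value realization")),
)

def validate(streams=STREAMS):
    """Check that every stream references only declared layers."""
    return all(layer in LAYERS for s in streams for layer in s.layers)

print(validate())  # prints True
```

Making the layer vocabulary explicit like this lets later design work (e.g. assigning governance controls per layer) be checked mechanically against the framework.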

  • ZHAO Hui, CHEN Jinghao, GUO Sha, LI Zhixing, YAN Longfei
    Journal of library and information science in agriculture. 2025, 37(11): 4-29. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0729

    In the digital economy era, efficient, secure, and compliant cross-border data flow has become a key issue for the coordination of global industrial chains and the deepening of regional cooperation, and a driving force for the high-quality development of the global digital economy. Currently, cross-border data flow is confronted with multiple challenges, including the interweaving of driving forces and contradictions, inadequate adaptation between mechanisms and technologies, and poor connection between compliance requirements and practical implementation. There is an urgent need to formulate systematic solutions from both theoretical and practical perspectives. To this end, this journal has invited five experts from universities and enterprises to organize a roundtable discussion covering the complete logical chain of "the underlying logic, mechanism construction, trend prediction, compliance governance, and scenario-based implementation of cross-border data flow". The key viewpoints are as follows: 1) Dynamic Mechanism and Governance Logic of Cross-border Data Flow: Cross-border data flow is jointly driven by three major forces: economic interests, technological innovation, and international cooperation. Meanwhile, it faces core contradictions including the trade-off between sovereign security and flow efficiency, the fragmentation of rules versus institutional coordination, and the tension between technological balance and the digital divide. It is necessary to establish a governance philosophy of "dynamic balance" and build a multilateral co-governance system through three types of tools (algorithm-based supervision, technology empowerment, and institutional experimentation) to promote the shift from "fragmented rule-based games" to "systematic coordination". 
2) Construction of a Collaborative Mechanism for Cross-border Data Flow: The mechanism for cross-border data flow needs to break through the limitations of a single dimension and form a multi-dimensional collaborative system integrating "policy, technology, and industry". At the policy level, regulatory sandbox pilots, standard mutual recognition, and shared compliance infrastructure are adopted to address regulatory barriers. At the technical level, scenario-specific needs are met along a maturity gradient, and the integrated innovation of "technology + management" is promoted. At the industry level, the self-regulatory role of professional fields such as library and information science (LIS) is leveraged to compensate for the rigidity of policies and build a closed-loop governance structure. 3) Trend Evolution and Risk Resilience of Cross-border Data Flow: In the next 3 to 5 years, cross-border data flow will exhibit characteristics of structural growth and domain differentiation. Smart manufacturing and digital trade will drive large-scale growth, while smart healthcare and modern agriculture will emerge as core sectors. It is imperative to address bottlenecks in infrastructure upgrading and the impact of "black swan" events, establish a risk resilience system across the technical, governance, and strategic dimensions, and promote service model innovation in LIS as well as forward-looking arrangements in the agricultural sector. 4) Compliance Governance and China's Path for Cross-border Data Flow: China has established a hierarchical and classified governance framework centered on three fundamental laws, and explored practical paths through institutional innovations such as the negative list system in free trade pilot zones. 
To tackle challenges including discrepancies in legal compliance requirements, technical barriers, and the complexity of regulatory coordination, it is necessary to strengthen legal synergy and rule mutual recognition, advance infrastructure construction and technological innovation, and improve the compliance service support system, thereby forming a China-specific path that balances security and controllability with high efficiency and convenience. 5) Practice of Cross-border Data Circulation and Credit Product Mutual Recognition: Cross-border data circulation lays a core foundation for the cross-border mutual recognition of credit products, which holds significant strategic value for promoting the facilitation of international trade and supporting the international development of enterprises. Currently, it faces challenges such as data security compliance, standard discrepancies, and high technical costs. To advance the implementation of cross-border mutual recognition of credit products, efforts should be made to improve the legal and regulatory framework and standard system, strengthen the construction of technical infrastructure, deepen international cooperation and mutual recognition mechanisms, and cultivate international credit service institutions.

  • GENG Ruili, WANG Yifan, LI Sentao, WEI Qi
    Journal of library and information science in agriculture. 2025, 37(6): 20-36. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0322

    [Purpose/Significance] Open government data (OGD) has increasingly adopted storytelling elements to improve public engagement and enhance user comprehension. Although this narrative approach enhances data accessibility and cognitive resonance, it raises significant privacy concerns. Specifically, storytelling may activate users' cognitive schemas, enabling them to infer sensitive personal information even from anonymized datasets. This trade-off between data usefulness and privacy risk poses a growing challenge for data providers and policymakers. In this study, we aim to explore how storytelling in OGD affects users' cognitive reasoning processes and leads to privacy risks. Our work innovatively combines cognitive psychology, information science, and privacy risk assessment. This interdisciplinary approach offers a new perspective on how data narratives shape inference behavior. Distinct from existing research, this paper focuses on how cognitive mechanisms driven by storytelling influence users' perception and extraction of private information. This research holds practical significance for designing privacy-aware data disclosure strategies that strike a balance between openness and protection. [Method/Process] To analyze the cognitive mechanisms underlying privacy risk, we adopted a mixed-methods research design grounded in relevance theory, schema theory, and the S-O-R model. We first constructed a user cognitive connection model that conceptualized how narrative stimuli activated cognitive processing and led to privacy-related inferences. Based on this model, we developed a privacy risk assessment index comprising three primary dimensions: data association and reasoning, data processing and decoding, and implicit suggestion and implication. We then conducted a controlled experiment involving 236 participants, who were randomly divided into a storytelling group and a non-storytelling group. 
To analyze the collected data, we used the CRITIC method to assign objective weights to evaluation indicators and applied a fuzzy comprehensive evaluation method to quantify and compare privacy risks across groups. [Results/Conclusions] Our results demonstrated that storytelling significantly heightened users' ability to infer sensitive personal information. The average inference score in the storytelling group was significantly higher than that in the non-storytelling group (p<0.05), and the comprehensive privacy risk level was rated as "medium risk" compared to the non-storytelling group's "low risk". Across all three risk dimensions, the storytelling group consistently exhibited greater cognitive engagement and higher potential for privacy exposure. These findings suggested that while storytelling enhanced user understanding, it also increased the risk of privacy violations. As such, we recommended that government data platforms adopt non-storytelling or partially abstracted data presentation strategies to reduce risk while preserving clarity. From a policy perspective, we advocated for the integration of intelligent narrative-generation algorithms and privacy-by-design principles to protect users' information. Although limited by sample size and data diversity, this study offered a foundation for future research into the cognitive underpinnings of privacy risk. Further work may explore other forms of storytelling and demographic influences on inference behavior.
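The CRITIC weighting step mentioned above derives indicator weights from the data itself, combining each indicator's contrast intensity (standard deviation) with its conflict with the other indicators (1 minus pairwise correlation). The sketch below illustrates the standard CRITIC formula; the score matrix is invented for demonstration and is not the study's experimental data.

```python
import numpy as np

def critic_weights(X):
    """CRITIC objective weights for a decision matrix X (rows: items,
    columns: criteria). Each criterion's information content is its
    normalized standard deviation times its total conflict with the
    other criteria; weights are these contents normalized to sum to 1.
    """
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # min-max normalize
    sigma = Z.std(axis=0, ddof=1)          # contrast intensity per criterion
    R = np.corrcoef(Z, rowvar=False)       # criterion-criterion correlations
    info = sigma * (1.0 - R).sum(axis=0)   # information content C_j
    return info / info.sum()

# Invented scores for 5 items on the paper's three risk dimensions
# (data association & reasoning, processing & decoding, implicit suggestion)
scores = np.array([
    [0.62, 0.55, 0.40],
    [0.71, 0.48, 0.52],
    [0.35, 0.60, 0.45],
    [0.80, 0.42, 0.61],
    [0.50, 0.58, 0.38],
])
w = critic_weights(scores)
print(w.round(3))
```

The resulting weight vector would then feed a fuzzy comprehensive evaluation, where each group's membership degrees over the risk levels are aggregated with these weights to yield the overall "low/medium risk" ratings the abstract reports.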