Accepted

Accepted, unedited articles published online and citable. The final edited and typeset version of record will appear in the future.

  • ZHANG Yuxiang, CUI Lirui, XIN Chengguo
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0517
    Accepted: 2026-03-10

    [Purpose/Significance] Amid rising concerns over the commercialization of scholarly publishing and the financial burden of APC-based models and transformative agreements, diamond open access (Diamond OA) has gained attention as a non-profit, community-governed alternative. Current open science debates increasingly emphasize a shift from improving access to transforming the governance of knowledge production, often termed "community over commercialization." In this context, Diamond OA is not merely a cost-free publishing option but a governance paradigm in which academic communities organize and sustain scholarly communication. This study positions Diamond OA within international discussions on open infrastructure, bibliodiversity, and equitable knowledge systems, and examines how its community-driven logic shapes goal setting, operational mechanisms, and evolutionary trends. It also explores how this governance logic generates structural tensions related to funding sustainability, infrastructural fragmentation, and evaluation regimes, with particular attention to implications for China. [Method/Process] The study employs a qualitative multi-method design integrating literature review, cross-regional case comparison, institutional analysis, and SWOT assessment. An analytical framework of "goal system - operational mechanisms - structural challenges - localization pathway" was constructed to examine representative Diamond OA practices. Cases including SciELO, Redalyc, the Open Library of Humanities (OLH), and the Public Knowledge Project (PKP) are analyzed to identify four organizational archetypes: national or regional alliances, scholar-led community governance, technology-empowered infrastructures, and overlay publishing models. These cases illustrate how consensus decision-making, pooled resource governance, collaborative feedback, and trust-based quality control function as core operational mechanisms.
The SWOT analysis further reveals the dynamic interaction between internal characteristics and external environmental conditions. [Results/Conclusions] The findings indicate that Diamond OA reorganizes scholarly publishing around community trust, shared responsibility, and public-interest orientation. It enables practices such as multilingual publishing, open peer review, and greater participation from non-English-speaking regions, thereby enhancing bibliodiversity and academic visibility. However, the model faces persistent structural constraints, including unstable funding, uneven technical capacity, and marginalization within evaluation systems dominated by commercial metrics. These challenges stem directly from its non-commercial and community-dependent nature. Internationally, Diamond OA initiatives show trends toward more structured governance networks, interoperable open infrastructures, and cross-regional collaboration. In China, despite advances in open science policy and infrastructure, Diamond OA development remains fragmented, with unclear community roles and rigid evaluation constraints. Rather than replicating international models, this study proposes a localized "four-wheel drive" framework - policy coordination, community governance, infrastructural empowerment, and evaluation reform - to integrate Diamond OA into China's scholarly communication system. This framework contributes to global discussions by demonstrating how community governance can be adapted within a state-coordinated context and offers practical guidance for developing sustainable and equitable open access practices.

  • MO Jingshi
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0636
    Accepted: 2026-03-09

    [Purpose/Significance] The rise of Generative Artificial Intelligence (AIGC) has made "prompt literacy" a crucial skill for effective human-AI interaction. However, there are significant gaps in public competency that risk widening the digital divide. Libraries, as foundational institutions for literacy and access, are ideally positioned to address this need. This study aims to clearly define the core roles of libraries in cultivating public prompt literacy and to develop a practical, actionable framework to guide their efforts in the AIGC era, thereby enhancing their social relevance and service impact. [Method/Process] This research employs a qualitative, multi-stage approach. First, a comprehensive literature review was conducted to analyze and synthesize the theoretical conception and multi-dimensional structure of prompt literacy. Second, through a strategic analysis of libraries' inherent functions and societal mandates, the study systematically proposes a tripartite role orientation. Third, building on this role definition, an integrated practical framework was constructed. This framework synthesizes insights from library science, educational design, and technology ethics, and is informed by an examination of early innovative practices from libraries globally, moving from conceptual roles to actionable strategies. [Results/Conclusions] The study concludes that to effectively foster public prompt literacy, libraries must consciously adopt and integrate three core roles. First, as educational guides, libraries must transition from information providers to facilitators of critical thinking and technical skill-building, specifically in human-AI collaboration. Second, as technology adapters, they must act as crucial intermediaries, assessing, curating, and sometimes tailoring AI tools to lower access barriers and meet diverse user needs.
    Third, as ethical guardians, they have a responsibility to navigate the risks associated with AIGC, such as misinformation, bias, and privacy concerns, thereby fostering a trustworthy information environment. From this integrated role orientation, a detailed four-dimensional practical path is formulated. 1) Resource construction involves building a multi-layered support system, including a repository of reusable prompt templates for common and discipline-specific tasks, as well as educational materials highlighting ethical pitfalls and case studies. 2) A hierarchical education system requires the design and delivery of differentiated instructional programs. These programs range from gamified workshops for youth and students, to advanced, discipline-integrated training for researchers and professionals, and from patient, needs-based, low-barrier tutorials for seniors to programs for the digitally disadvantaged. 3) Service integration emphasizes the importance of seamlessly embedding prompt literacy support into core library services and user workflows. This includes integrating prompt design assistance into research consultations, embedding literacy modules into academic course curricula in partnership with faculty, and demonstrating AIGC applications in everyday life through community programs. 4) Ethical regulation requires the operationalization of ethical principles through explicit policies for library AI use, transparent communication with users about AI-assisted services, the development of ethical checklists and assessment tools, and the fostering of community dialogue on AI ethics. This comprehensive framework gives libraries a strategic roadmap for translating the importance of early prompt literacy development into practical, long-lasting services.
Implementing this approach allows libraries to strengthen their public education mission in the digital age, establish themselves as vital and adaptable community hubs, and play a pivotal role in fostering a more literate, equitable, and ethically conscious society amid rapid AI advancements. Future research could focus on assessing the impact of these interventions and identifying the skills necessary for librarians to fulfill these new roles successfully.

  • CHEN Yuanyuan, HU Shaohuang
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0536
    Accepted: 2026-03-05

    [Purpose/Significance] Disruptive technology identification has become an increasingly important research topic in the context of rapid technological evolution and strategic decision-making for governments and enterprises. However, existing data-driven identification approaches often suffer from two critical limitations. First, disruptive technology datasets are typically characterized by severe class imbalance, where truly disruptive cases constitute only a small fraction of the total samples, leading to biased learning and poor generalization. Second, most existing studies rely on a single machine learning model, which limits the ability to capture complex and heterogeneous patterns embedded in high-dimensional technical text features. These issues restrict the robustness, accuracy, and practical applicability of current identification frameworks. To address these challenges, this study aims to construct an optimized disruptive technology identification model that jointly considers data imbalance mitigation and model performance enhancement, thereby improving the reliability and stability of predictive results and contributing to methodological advancements in technology intelligence and innovation management research. [Method/Process] Based on the reproduction of a widely used baseline model built upon XGBoost, this study proposed a two-stage optimization framework integrating data resampling and ensemble learning. In the data preprocessing stage, a hybrid SMOTE-ENN sampling strategy was employed to reconstruct the training dataset. The SMOTE component synthetically generated minority class samples to enhance class representation, while the ENN component removed ambiguous and noisy samples from overlapping regions, thus achieving a balance between noise reduction and information preservation. This strategy effectively alleviated the adverse impact of class imbalance on model learning without excessively distorting the original data distribution. 
    In the modeling stage, a stacking-based ensemble learning framework was constructed by integrating multiple heterogeneous base learners, including XGBoost, LightGBM, Extra Trees, and Support Vector Machines. These base models were selected to capture complementary decision boundaries and feature interactions from different learning perspectives. A Random Forest model was further employed as a meta-learner to aggregate the outputs of the base learners and perform higher-level feature integration. Through this hierarchical learning mechanism, the proposed framework enhanced both representation capability and predictive robustness, enabling more accurate identification of disruptive technologies under complex and noisy data conditions. [Results/Conclusions] Extensive experimental evaluations demonstrate that the proposed optimization model significantly outperforms the baseline XGBoost model across multiple core performance metrics, including Accuracy, Precision, Recall, and F1-Score. Notably, the F1-Score improved substantially, from 0.63 to 0.98, indicating a marked enhancement in the model's ability to correctly identify minority disruptive technology samples while maintaining high overall stability. The results confirm that the combined application of hybrid resampling and ensemble learning effectively addresses the challenges of sample imbalance and model bias in disruptive technology identification tasks. In conclusion, the proposed framework provides a robust and scalable solution for identifying disruptive technologies in high-dimensional, imbalanced data scenarios. Beyond improving prediction accuracy, this study offers methodological insights for technical text modeling and innovation analytics. Its approach can be easily adapted to other fields with similar data imbalance and complexity issues.
Future research may further explore adaptive sampling strategies and deep learning-based ensemble architectures to enhance temporal and semantic representation capabilities.

  • AN Lin
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0750
    Accepted: 2026-03-05

    [Purpose/Significance] The rapid advancement of generative artificial intelligence (AI) is driving societal digital transformation, yet it simultaneously poses unprecedented systemic risks to personal information security due to the large-scale, automated, and complex nature of its data processing. Previous research has lacked exploration of governance pathways that consider endogenous technological evolution and specific model iterations. This paper takes the technological evolution of mainstream, large-scale generative AI models, both domestic and international, as a starting point, and systematically reveals the impact of generative AI on personal information protection principles across the stages of data collection, model operation, and content generation. The focus is on analyzing how technological innovations in China's DeepSeek, including open-source traceability, decision transparency, and flexible deployment, lay the groundwork for risk-graded governance. This study not only broadens the theoretical perspective on AI governance and promotes the formation of a "technology-institution" collaborative governance paradigm, but also offers innovative and actionable insights for building an agile and effective personal information protection system in China amidst the rapid adoption of generative AI. [Method/Process] This study employs a comparative analysis and inductive research approach. First, it systematically compares the core technological differences among mainstream generative AI models, both domestic and international, across three dimensions: model ecosystem, model capabilities, and deployment methods. Through this comparison, it analyzes the challenges generative AI poses to personal information protection at various stages, including data collection, model operation, and content generation.
Second, the study systematically examines the differentiated impacts brought about by DeepSeek's technological iterations on personal information security governance. Building on this foundation, the research proposes a comprehensive governance strategy centered on the principles of inclusiveness and prudence, guided by risk grading, and covering all operational stages of generative AI. This strategy emphasizes the critical role of DeepSeek's technical characteristics in supporting the implementation of this framework. [Results/Conclusions] The research indicates that constructing a risk-graded governance system based on the sensitivity of personal information is an effective approach to balancing security and innovation in generative AI. This system emphasizes distinguishing between sensitive and general information during data collection, achieving traceability and purpose control during model operation, and implementing differentiated security safeguards during content generation. With its technical advantages, including open-source traceability, decision transparency, and flexible deployment, DeepSeek provides technical validation and practical possibilities for graded governance. This facilitates the protection of sensitive personal information in high-risk scenarios while simultaneously fostering technological iteration and application innovation in medium- to low-risk contexts. Future research should further incorporate multi-dimensional governance elements such as industry self-regulation, social coordination, and international collaboration. Empirical analysis should also be conducted to test the applicability and effectiveness of the governance framework, thereby gradually developing a well-rounded personal information security governance scheme that adapts to the dynamic evolution of technology.

  • WANG Chao, CHEN Jie, HOU Hui
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0730
    Accepted: 2026-02-13

    [Purpose/Significance] Against the global surge of generative artificial intelligence (GenAI) and large language models (LLMs), academic libraries are undergoing a critical paradigm shift in their reference services. While "AI Virtual Librarians" (AIVL) are increasingly adopted to enhance efficiency, cross-national evidence regarding how they are configured alongside traditional "Human Live Reference" (HLR) remains scarce. This study aims to reveal the structural differences in human-AI configurations between Chinese and international top-tier university libraries. It seeks to identify the divergence between "technology-driven" and "human-centric" service models and proposes a governance-oriented hybrid pathway to inform the digital transformation of academic libraries. [Method/Process] The study established two high-resource samples: 42 libraries from China's "Double First-Class" universities and 94 libraries from the U.S. News Top 100 World Universities. A systematic website investigation and standardized interaction tests were conducted to collect data on service availability and deployment models. The study not only quantified the deployment of HLR and AIVL (classified into rule-based and LLM-based) but also qualitatively evaluated the "Core Service Contents" and "Linkage Mechanisms" (e.g., traceability, boundaries, and human fallback). Chi-square tests were employed for statistical analysis, and robustness checks were performed using both broad and strict counting rules to ensure validity. [Results/Conclusions] Results indicate that while the overall service coverage is similar across groups (approx. 74%), the service structure diverges significantly. International libraries predominantly rely on the "Human-only" mode (66.0%), prioritizing deep research support, academic integrity, and privacy protection. In contrast, Chinese libraries show a significantly higher adoption of AIVL (57.1% vs. 8.5%) and LLMs (26.2% vs. 
1.1%), with 52.4% operating in an "AI-only" mode. Content analysis reveals that Chinese AIVLs focus on transactional efficiency and 24/7 accessibility, whereas international counterparts focus on distinct research guides and governance. The study identifies a critical trade-off: China's aggressive AI adoption enhances accessibility but faces challenges regarding answer hallucinations and the lack of human fallback mechanisms. To address these challenges, the paper recommends a "Human-AI Collaborative Loop" model. Key strategies include: 1) Implementing risk-tiered routing, where low-risk transactional queries are handled by AI and high-risk research inquiries are directed to humans; 2) Optimizing AI reliability through Retrieval-Augmented Generation (RAG) and controlled knowledge bases to ensure traceability; 3) Establishing clear governance boundaries and stratified implementation paths for libraries with different resource levels, ensuring a balance between technological innovation and service ethics.
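
    The reported adoption gap can be checked with a standard chi-square test. The counts below are reconstructed from the percentages in the abstract (57.1% of 42 Chinese libraries ≈ 24; 8.5% of 94 international libraries ≈ 8) and are therefore approximate:

```python
# Chi-square test of AIVL deployment vs. library group. Counts are
# reconstructed from the reported percentages and are approximate.
from scipy.stats import chi2_contingency

table = [[24, 42 - 24],   # China: AIVL deployed / not deployed
         [8, 94 - 8]]     # International: AIVL deployed / not deployed
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```

    Even with Yates continuity correction (SciPy's default for 2x2 tables), the difference is far beyond conventional significance thresholds, consistent with the study's finding of structural divergence.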

  • LI Mei, YIN Mingzhang
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0735
    Accepted: 2026-02-12

    [Purpose/Significance] As digital technologies such as 5G and generative AI become more prevalent in higher education, university libraries have evolved from traditional collections of books to ecosystems of cross-modal and multi-source resources, encompassing core collection resources, open-access resources, and user-generated content. However, the "resource silo" issue caused by heterogeneous resources and the mismatch between passive services and dynamic user scenarios in research and teaching remain unresolved. Existing studies lack integrated closed-loop mechanisms linking resources, scenarios, and users. This study aims to address these gaps by promoting libraries' transformation from "resource storage centers" to "proactive knowledge service centers." Its key innovation lies in constructing a scenario-driven three-dimensional collaborative model, which bridges the disconnect between resource integration and scenario adaptation, providing theoretical and practical support for intelligent library development. [Method/Process] Guided by ERG demand theory and context-aware computing, this study adopts a mixed-methods approach combining literature research, technical design, and case validation. A three-dimensional collaborative model of "Resource Integration - Scenario Adaptation - Smart Services" was proposed. For resource integration, a "three-dimensional integration + four-step fusion" framework was developed: standardized access via unified DCAT-AP/RDA metadata and multi-protocol gateways, associative reorganization through cross-modal semantic matching and knowledge graph aggregation, and hierarchical storage (hot/warm/cold tiers). The four-step fusion includes data preprocessing, modality conversion (ViT, Whisper-large, YOLOv8 models), feature fusion (attention mechanism + Transformer encoder), and knowledge generation (knowledge graphs, rule bases). 
An innovative five-dimensional dynamic scenario model (S=f(P,R,S,T,C)) quantifies user profiles, resource attributes, spatial locations, temporal contexts, and social connections for precise scenario identification. Technically, a "cloud-edge-device" architecture provides support, while a hierarchical service pathway (instant/in-depth/customized services) and a multi-dimensional evaluation system (resource/service/user dimensions) ensure closed-loop optimization. [Results/Conclusions] The model effectively achieves in-depth integration of multi-source cross-modal resources and precise scenario adaptation. Validated through typical applications - full-cycle research support and immersive teaching (VR ancient book restoration, MR anatomy demonstration) - it significantly enhances resource utilization efficiency and user experience, resolving the core pain point of resource-scenario disconnection. The model strongly supports libraries' transformation from passive resource supply to proactive knowledge services. Limitations include limited application of cross-modal technologies to virtual reality resources, insufficient coverage of management and social service scenarios, and the need for long-term validation of the evaluation system. Future research will deepen large-model-aided cross-modal fusion, expand scenario coverage, improve the evaluation system with third-party participation, and promote inter-university resource sharing to better support higher education development.
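
    The abstract specifies the scenario model S = f(P, R, S, T, C) only by its five inputs. A toy rendering as a weighted aggregation might look like the following; the weights and sub-scores are invented placeholders, not the paper's functional form.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    profile: float   # P: user-profile match, in [0, 1]
    resource: float  # R: resource-attribute match
    spatial: float   # S: spatial-location match
    temporal: float  # T: temporal-context match
    social: float    # C: social-connection match

def scenario_score(s: Scenario, w=(0.3, 0.25, 0.15, 0.15, 0.15)) -> float:
    """Weighted aggregation of the five context dimensions (weights invented)."""
    dims = (s.profile, s.resource, s.spatial, s.temporal, s.social)
    return sum(wi * di for wi, di in zip(w, dims))

# Hypothetical research-support vs. teaching contexts for one user.
research = Scenario(0.9, 0.8, 0.4, 0.7, 0.5)
teaching = Scenario(0.5, 0.6, 0.9, 0.9, 0.8)
print(f"research fit: {scenario_score(research):.2f}")
print(f"teaching fit: {scenario_score(teaching):.2f}")
```

    In practice the sub-scores would come from the "cloud-edge-device" sensing layer, and the highest-scoring scenario would route the user to the instant, in-depth, or customized service tier.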

  • WU Yuhao, ZHOU Zhigang, LIU Wei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0727
    Accepted: 2026-02-12

    [Purpose/Significance] As the core hub for public cultural services and the inclusive dissemination of knowledge, smart libraries are undergoing continuously accelerating digital transformation. However, they also face multiple digital risks such as data fragmentation, insufficient technological adaptation, and prominent system vulnerabilities, which seriously constrain the stability and sustainability of public cultural services. The construction of digital resilience has become a key support for smart libraries to respond to environmental changes and ensure the realization of core functions. This paper focuses on the sustainable development demands of smart libraries in the digital age. Based on the dual-wheel drive perspective of "data elements-digital technology", it explores the generation logic and improvement path of digital resilience. This approach can not only provide a new dimension for improving the theoretical system of digital risk governance in smart libraries, but also offer practical solutions to real problems such as data fragmentation and insufficient technical adaptation. Furthermore, it can enhance the stability and efficiency of public cultural services. [Method/Process] Supported by theories of data governance, technological innovation and organizational resilience, this research adopts a progressive approach of literature review, logical deconstruction, framework construction and path optimization, and integrates literature research methods, system deconstruction methods and logical deduction methods.
    We systematically analyze the penetration and impact of data elements and digital technologies on the resources, services, technologies, and organizational dimensions of smart libraries, clarify the correlation logic and operational mechanism between dual-wheel drive and digital resilience, construct practical approaches from two aspects: the release of data element value and the collaboration of digital technology clusters, and provide a multi-dimensional guarantee system. [Results/Conclusions] The core essence of digital resilience in smart libraries lies in their dynamic adaptation, efficient response, and continuous evolution capabilities in the face of digital risks. Its formation relies on the deep collaboration between data elements and digital technologies: Data elements, by building a multimodal collaborative data ecosystem, break down information silos and lay a solid resource foundation for digital resilience. Digital technologies, relying on the collaborative efforts of technology clusters such as big data, artificial intelligence, and blockchain, form a full-cycle risk-response technology system covering risk perception, emergency response, and system recovery. The coupled interaction between the two promotes a qualitative leap in digital resilience from passive risk resistance to active value creation, ultimately achieving deep integration between the "data elements-digital technology" dual-wheel drive and resilience construction. Based on this, practical suggestions are put forward. Smart libraries should strengthen the standardized construction of data governance, promote the scenario-based application of technology clusters, and improve the cross-departmental collaboration mechanism.

  • ZHANG Keyong, WU Shuang
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0701
    Accepted: 2026-02-12

    [Purpose/Significance] Against the backdrop of the digital wave and the Healthy China initiative, efforts to enhance national health information literacy face challenges, including an insufficient supply of high-quality popular science content and low public enthusiasm for its dissemination. This study aims to explore the internal driving forces, core influencing factors, and transmission paths of the willingness to share online health popular science information. It further intends to provide theoretical support for regulatory authorities and popular science platforms in formulating incentive policies and safeguard mechanisms, thereby promoting the participation of social entities in popular science dissemination, increasing the supply of high-quality popular science resources, and enhancing the health information literacy of the general public. [Method/Process] A three-stage research design of "Grounded Theory - Fuzzy DEMATEL - ISM" was adopted. Firstly, interview data from diverse groups were collected through semi-structured interviews. Grounded Theory coding was then applied to extract the initial influencing factors and construct a multi-dimensional driving force system. Secondly, Fuzzy DEMATEL was used to calculate the centrality and causality degrees to identify the key factors. Finally, the interpretive structural modeling (ISM) method was employed to integrate the influencing factors, establish a hierarchical structure, and clarify the transmission logic and action mechanism. This method not only enables the acquisition of the most original influencing factor system from interview materials but also reveals the interaction relationships among these factors, which is in line with the research requirements and trends in the field of information science. [Results/Conclusions] The results of Grounded Theory analysis identified 13 influencing factors, which are categorized into four dimensions.
The personal dimension includes four factors: interpersonal interaction traits, perceived utility, health information literacy, and self-efficacy. The information dimension consists of four factors: information quality, information source credibility, information richness, and information clarity. The platform dimension comprises two factors: interaction promotion mechanism and platform technology. The social dimension contains three factors: social economy, social public events, and the clustering effect. Fuzzy DEMATEL analysis indicated that perceived utility, health information literacy, information clarity, and social economy are the key factors. ISM analysis revealed a 4-layer hierarchical structure of influencing factors from the superficial to the deep, with the social economy being the deepest-layer factor. Additionally, four key transmission paths were sorted out. Based on the research conclusions, four suggestions are proposed: Firstly, from the personal dimension, efforts should be made to mobilize the subjective role of users. Secondly, from the information dimension, the information quality and clarity for content creators and sharers should be improved. Thirdly, from the platform dimension, active cooperation with content sharers should be pursued and the interaction mechanism should be optimized. Finally, from the social dimension, the government should promote the development of the health popular science industry. In subsequent studies, empirical tests (such as structural equation modeling and fsQCA) can be incorporated to ensure the reliability and validity of the theory.
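
    The centrality and causality degrees computed in the DEMATEL step can be illustrated with a crisp (non-fuzzy) direct-influence matrix. The 4-factor matrix below is invented for demonstration; the study's fuzzy variant would first defuzzify expert ratings before this step.

```python
import numpy as np

# Invented 4-factor direct-influence matrix (0 = no influence ... 3 = strong).
D = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

N = D / D.sum(axis=1).max()            # normalise by the largest row sum
T = N @ np.linalg.inv(np.eye(4) - N)   # total-relation matrix T = N(I - N)^-1
r, c = T.sum(axis=1), T.sum(axis=0)    # influence dispatched / received
centrality = r + c                     # overall importance of each factor
causality = r - c                      # > 0: cause factor, < 0: effect factor
print("centrality:", centrality.round(2))
print("causality: ", causality.round(2))
```

    Factors with high centrality (such as perceived utility or information clarity in the study) are the key factors, while the sign of causality separates cause-layer factors (such as social economy) from effect-layer ones.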

  • LV Kun, YU Linrong, WEN Yuzhu, LI Beiwei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0519
    Accepted: 2026-02-12

    [Purpose/Significance] The governance of health medical data is fundamentally challenged by the "protection-sharing" paradox: the critical need to safeguard sensitive personal information often conflicts with the desire to utilize these data for public benefit. This issue is particularly pressing under China's "Healthy China" initiative, which promotes data sharing while the rapid expansion of medical APPs has led to increasing data misuse incidents. Existing research has extensively explored technological solutions such as blockchain, but a significant gap remains in understanding the dynamic, strategic interactions among the key stakeholders - government regulators, APP operators, and users - who operate with bounded rationality. This study addresses this gap by constructing a tripartite evolutionary game model. Its primary significance lies in dynamically modeling the co-evolution of strategies to identify critical leverage points, thereby providing a theoretical basis for designing effective collaborative governance mechanisms that can reconcile data protection with utilization and ensure the sustainable development of the health data ecosystem. [Method/Process] This study established a three-party evolutionary game model involving government regulators, medical-health APP operators, and users, based on the core assumption of bounded rationality. The model incorporated a comprehensive set of parameters, including direct benefits, various costs (compliance, regulatory), data risks, and network benefits under different regulatory scenarios. Replicator dynamic equations were derived for each party to mathematically describe the evolution of their strategy choices over time. The stability of the system's equilibrium points was rigorously analyzed using Lyapunov's first method to identify key stability thresholds. To validate the theoretical analysis and explore the dynamic evolutionary paths, numerical simulations were conducted using MATLAB. 
These simulations tested the impact and sensitivity of critical parameters - such as user-perceived data risk under operator self-discipline, user network benefits under dynamic regulation, government compliance rewards, and penalties for overdevelopment - from various initial strategy combinations. [Results/Conclusions] The analysis yielded several critical findings. First, users' authorization decisions are highly sensitive to the operational context, and they are significantly positively influenced by the perceived level of operator self-discipline and the observed intensity of government dynamic regulation. Enhancing user network benefits under effective regulation and reducing perceived data risks are paramount to encouraging authorization. Second, for APP operators, increasing government penalties for overdevelopment acts as a powerful deterrent, rapidly steering operators towards compliance. In contrast, government financial rewards for compliance, while effective, must be carefully balanced against their potential fiscal burden, which can slow the government's own stabilization into a dynamic regulatory role. Third, the system exhibits strong path dependence, capable of converging towards either an inefficient equilibrium (Non-Authorization, Overdevelopment, Passive Regulation) or the optimal Pareto state (Authorization, Self-discipline, Dynamic Regulation), depending heavily on initial conditions. The study concludes that resolving the paradox requires a multi-faceted strategy: advancing and ensuring robust anonymization technologies, implementing intelligent graded supervision that combines incentives and punishments, and firmly establishing institutional safeguards for user data sovereignty to build essential trust. A key limitation is the omission of data leakage risks from government data openness. Future work will integrate empirical data and consider user heterogeneity to refine the model.
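The replicator-dynamics machinery described above can be sketched in a few lines. All payoff coefficients below are illustrative assumptions, not the paper's calibrated parameters; under this particular (hypothetical) payoff structure the system converges toward the Pareto state (Authorization, Self-discipline, Dynamic Regulation).

```python
# Sketch of tripartite replicator dynamics (user x, operator y, government z),
# Euler-integrated. All payoff coefficients are hypothetical illustrations,
# not the paper's calibrated parameters.

def simulate(x, y, z, steps=20000, dt=0.01):
    """Evolve strategy shares: x = P(authorize), y = P(self-discipline),
    z = P(dynamic regulation)."""
    for _ in range(steps):
        # Payoff advantage of the cooperative strategy for each party:
        du = 2.0 * z + 1.5 * y - 1.0 * (1 - y)  # network benefit + trust vs. perceived risk
        do = 3.0 * z + 1.0 * x - 2.0            # expected fines + user trust vs. extra profit
        dg = 0.8 * x + 0.4                      # governance payoff from authorized data flows
        # Replicator equation: dp/dt = p(1 - p) * (payoff advantage)
        x += dt * x * (1 - x) * du
        y += dt * y * (1 - y) * do
        z += dt * z * (1 - z) * dg
    return x, y, z
```

From a neutral start (0.5, 0.5, 0.5) these payoffs drive all three shares toward 1; a payoff structure in which regulation pays off only while misuse is common would instead exhibit the path dependence discussed above.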

  • JIANG Jiping
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0739
    Accepted: 2026-02-12

    [Purpose/Significance] With the accelerated convergence of artificial intelligence and the metaverse, smart library information services are undergoing a profound transformation from tool-oriented functional optimization toward holistic cognitive support. Traditional information retrieval and service models increasingly struggle to explain and support complex cognitive activities involving multi-agent collaboration, contextual awareness, and continuous knowledge construction. From the perspective of human-machine-environment collaborative cognition, this study aims to explore the paradigm shift of smart library information services in intelligent digital environments and to establish an integrated theoretical framework that coordinates technological systems, cognitive processes, and contextual factors, thereby providing a systematic theoretical foundation for service model innovation and capability enhancement in smart libraries. [Method/Process] Through systematic literature analysis, this study first reviews the evolutionary trajectory of information search paradigms - from symbolic computation and semantic understanding to social perception - and then proposes Ecological Search as an emerging paradigm. Drawing on distributed cognition, embodied cognition, and information ecology theories, a human-machine-environment cognitive symbiosis search architecture was constructed, driven by a dual core of social multi-agent communities and contextualized metaverse environments. The architecture operates through an inner-outer dual-loop mechanism consisting of environmental perception and intention emergence, federated retrieval and knowledge fusion, collaborative generation and narrative construction, and cognitive evolution and ecological calibration. Furthermore, an "interaction-knowledge-context" three-dimensional analytical model was developed to decompose key service capabilities and derive differentiated integration pathways under diverse service objectives. 
[Results/Conclusions] The study proposes three smart library information service models - interaction-enhanced integration, knowledge-reconstructive integration, and context-immersive integration - and clarifies how a unified cognitive architecture can be flexibly configured for different user groups and service scenarios. The findings indicate that the ecological search paradigm transcends system-centered instrumental rationality and reconceptualizes information search as a human-machine-environment collaborative process supporting continuous cognitive construction. By integrating multi-agent systems and contextualized environments, this paradigm provides essential mechanisms for smart libraries to move beyond information provision toward advanced cognitive support. The study offers theoretical insights and practical implications for achieving an ecological transformation of smart library information services while balancing technological innovation and human-centered values.

  • LI Jie, ZHANG Xingwang, QIAN Guofu, WEI Zhipeng
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0716
    Accepted: 2026-02-10

    [Purpose/Significance] With a new round of technological innovation and industrial transformation represented by artificial intelligence, the strategic value of data as a key production factor has become increasingly prominent. Data have risen to become an important driving force for reshaping national competition and driving economic growth. Therefore, analyzing the construction plan of the UK's National Data Library (NDL) can provide a useful reference and insight for the development of China's data factor market and the high-quality development of China's data industry. [Method/Process] The UK NDL construction project is an initiative promoted by the Department for Science, Innovation and Technology (DSIT) of the UK government, aimed at building a "Great British Data Library" for the era of artificial intelligence, and establishing a national-level data infrastructure and AI data facility for cross-government, cross-sector, and cross-department data sharing. Based on an investigative analysis of the UK NDL construction plan, this article examines the origins, goals, steps, and challenges of the NDL construction, compares relevant planning documents with China's policies and measures regarding data elements, and further explains the key implications for China from four aspects: top-level design, implementation operations, value sharing, and ecosystem. [Results/Conclusions] The UK's NDL construction plan offers a deeper insight into the development of China's data element market because its focus is shifting from the physical "aggregation of data resources" to the systematic "construction of a data ecosystem". The UK's NDL construction has a strong economic and instrumental character. Its core goal is to leverage public data sharing to gain innovative returns and economic growth for private enterprises. 
In contrast, China places more emphasis on the empowerment of industry, technology, and society, stressing the role of data elements in driving the transformation and upgrading of various industries, serving broader economic development and the modernization of social governance. In building a national data infrastructure, China should regard the cultivation and construction of a data ecosystem as a systematic social project, establishing a multi-stakeholder data ecosystem involving government, industry, academia, and the public. The high-quality development of the national data industry and the construction of a data element market require us to maintain clarity and determination in top-level design, flexibility and pragmatism in implementation, fairness and innovation in value sharing, and ultimately inclusiveness and trust within the ecosystem. China possesses more abundant data resources, a more complete data environment, stronger social organizational capacity, and more comprehensive digital infrastructure. If it continues to innovate in areas such as a scientifically and reliably structured data element market, refined data governance frameworks, flexible and inclusive data regulatory environments, and healthy and sustainable data ecosystems, China will be able to more safely and efficiently realize the diffusion effects of data element value, forming a uniquely Chinese paradigm in the global competition of data governance.

  • LIANG Xiaodong, WANG Ru, WANG Shuaijin, XU Dongmei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0655
    Accepted: 2026-02-03

    [Purpose/Significance] Unleashing the consumption potential of rural residents plays a pivotal role in expanding domestic demand and cultivating new economic growth points. The digital economy, driven by data elements as the core force, is gradually becoming a key engine to activate the consumption potential of urban and rural areas in China and promote consumption upgrading. National Big Data Comprehensive Pilot Zones (NBDCPZs), with their "agglomeration of data elements, cross-domain collaborative empowerment, and precise service matching", continuously meet the personalized and diversified consumption demands of rural residents, and have unique value in unleashing the consumption potential of rural residents. [Method/Process] After conducting a theoretical analysis of the impact of the NBDCPZs on the consumption potential of rural residents, this study formulates corresponding research hypotheses. This study uses data from the China Family Panel Studies (CFPS) from 2010 to 2022 and considers the "National Big Data Comprehensive Pilot Zones" policy as a quasi-natural experiment. On the basis of measuring rural residents' consumption potential using the propensity score matching (PSM) method, the difference-in-differences (DID) method is employed to evaluate the impact of NBDCPZs construction on rural residents' consumption potential. [Results/Conclusions] The research findings are as follows: 1) After balancing the endowment characteristics of urban and rural households via the PSM method, the per capita consumption expenditure of rural residents was found to be 2 255.23 yuan less than that of urban residents. This indicates that rural areas still have enormous untapped consumption potential. 
2) The construction of NBDCPZs significantly promotes the release of rural residents' consumption potential, and this conclusion remains robust after undergoing the parallel trend test, placebo test, counterfactual test, addition of fixed effects, and exclusion of the impacts of other policies. 3) An analysis of heterogeneity across sample household and regional characteristics reveals that the effect of NBDCPZs construction on unlocking rural residents' consumption potential is particularly prominent in eastern China, and is more salient in rural households with a male household head, low income, and middle-aged composition. 4) Mechanism analysis indicates that the "National Big Data Comprehensive Pilot Zones" policy releases the consumption potential of rural residents by increasing their income levels and enhancing technological progress in rural areas. Furthermore, household debt exerts a positive moderating effect on the process of releasing rural residents' consumption potential through the construction of the National Big Data Comprehensive Pilot Zones. Based on the research conclusions, the following countermeasures and suggestions are put forward: 1) Advance the differentiated layout and integrated application of rural digital infrastructure; 2) Establish a long-term mechanism for enhancing rural residents' digital literacy; 3) Optimize the income increase system for rural residents and consolidate the foundation for consumption upgrading.
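The difference-in-differences logic behind the evaluation can be illustrated with a toy 2x2 design on simulated household data. The group structure, the 900-yuan effect, and every other number here are invented for illustration; the paper's actual estimation uses CFPS panel data with PSM balancing and fixed effects.

```python
# Toy 2x2 difference-in-differences on simulated household consumption.
# Group/period means stand in for the paper's fixed-effects panel regression;
# the 900-yuan treatment effect and all other numbers are invented.

import random

random.seed(0)

def consumption(treated, post):
    """treated: household in a pilot-zone province; post: after designation."""
    base = 8000 + 1500 * treated   # pre-existing level gap between groups
    trend = 600 * post             # common time trend shared by both groups
    effect = 900 * treated * post  # the policy effect DID should recover
    return base + trend + effect + random.gauss(0, 50)

def did_estimate(n=2000):
    means = {(t, p): sum(consumption(t, p) for _ in range(n)) / n
             for t in (0, 1) for p in (0, 1)}
    # (treated post - treated pre) - (control post - control pre)
    return (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
```

The double difference cancels both the level gap between groups and the common time trend, leaving only the treatment effect, which is why the parallel-trend test mentioned above is the key identifying assumption.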

  • ZHANG Ling
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0683
    Accepted: 2026-02-03

    [Purpose/Significance] This study aims to systematically examine the application of simulation modeling in bibliometrics and to clarify its methodological position within the broader framework of digital humanities tools and agricultural knowledge services. In particular, the paper highlights the innovative potential of integrating simulation modeling with generative artificial intelligence, which enables more flexible representation of heterogeneous behaviors and context-dependent decision-making processes. By bridging bibliometrics, digital humanities tools, and agricultural knowledge services, this research contributes to the theoretical advancement of bibliometric methodology and provides a structured foundation for future applications in agricultural information practice. [Method/Process] This study adopts a systematic literature-based analytical approach to review and synthesize major simulation modeling methods applied in bibliometrics. The analysis covers several representative categories of simulation models, including dynamic modeling of classical bibliometric laws, evolution models of co-authorship and citation networks, multi-agent-based simulation, information and knowledge diffusion models, and evolutionary game-theoretic models. These methods are examined with respect to their modeling objects, underlying assumptions, key parameters, and analytical capabilities. Rather than organizing the review solely by research topics, this study emphasizes simulation modeling logic as the central analytical thread. Each category of simulation method is analyzed in terms of how micro-level rules and interactions generate macro-level bibliometric patterns. Particular attention is paid to the role of digital humanities tools in operationalizing these models, especially through visualization, system integration, and interactive simulation environments that facilitate exploration and interpretation. 
In addition, this study introduces recent advances in generative artificial intelligence, particularly large language model-based agents, as an extension of traditional multi-agent simulation. By incorporating generative AI into simulation frameworks, it becomes possible to model heterogeneous agents with richer cognitive representations, adaptive behaviors, and contextual reasoning abilities. The methodological discussion draws on theoretical foundations from bibliometrics, complex systems, and computational social science, while also considering practical constraints related to data availability, model calibration, and validation. [Results/Conclusions] The analysis demonstrates that simulation modeling significantly enhances the explanatory power of bibliometric research by revealing dynamic mechanisms behind literature growth, collaboration structures, and knowledge diffusion processes. Compared with traditional static indicators, simulation-based approaches provide deeper insights into how bibliometric patterns emerge and evolve over time. The integration of generative artificial intelligence further expands this capability by enabling more realistic modeling of behavioral heterogeneity and context-sensitive decision-making among research actors. From an application perspective, the study shows that simulation models and associated digital humanities tools can be effectively embedded into agricultural knowledge service workflows. These applications include research evaluation, scientific information services, and policy communication, where simulation-based scenario analysis can support strategic planning and decision-making. At the same time, the study identifies several challenges, including data quality constraints, computational costs, and issues related to model interpretability and transparency. 
The findings suggest that future research should focus on improving data integration, enhancing model validation strategies, and further exploring the integration of generative AI to support more adaptive and explainable simulation systems. By doing so, simulation-based bibliometrics can play a more substantial role in advancing agricultural information services and research management in complex, data-intensive environments.
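As a concrete instance of the review's central thread - micro-level rules generating macro-level bibliometric patterns - a minimal preferential-attachment citation model (all parameters hypothetical) reproduces the heavy-tailed citation distributions that classical bibliometric laws describe:

```python
# Minimal preferential-attachment citation model: each new paper cites
# existing papers with probability proportional to citations already received
# (plus one baseline unit of attractiveness per paper). Parameters are
# illustrative only; duplicate references are possible in this toy model.

import random

def simulate_citations(n_papers=5000, refs_per_paper=5, seed=42):
    rng = random.Random(seed)
    citations = [0] * n_papers
    pool = list(range(10))  # each list entry = one unit of attractiveness
    for new in range(10, n_papers):
        for _ in range(refs_per_paper):
            cited = rng.choice(pool)   # proportional sampling via the pool
            citations[cited] += 1
            pool.append(cited)         # the richer get richer
        pool.append(new)               # newcomer enters with baseline weight
    return citations
```

The resulting distribution is strongly skewed: a small share of papers accumulates a disproportionate share of all citations, the kind of emergent macro-pattern that static indicators describe but simulation can explain mechanistically.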

  • LYU Lucheng, ZHOU Jian, SUN Wenjun, ZHAO Yajuan, HAN Tao
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0672
    Accepted: 2026-01-23

    [Purpose/Significance] The use of large language models (LLMs) for patent text mining has become a major research topic in recent years. However, existing studies mainly focus on the application of LLMs to specific tasks, and there is a lack of systematic evaluation of the application effects of fine-tuned LLMs across multiple scenarios. To address this problem, this study takes ChatGLM, an open-source LLM that supports local fine-tuning, as an example. We conduct a comparative evaluation of three types of patent text mining tasks - technical term extraction, patent text generation, and automatic patent classification - under a unified experimental framework. The performance of fine-tuned models is compared from six aspects: different training data sizes, different numbers of training epochs, different prompts, different prefix lengths, different datasets, and single-task versus multi-task fine-tuning. [Method/Process] This study was based on an open-source LLM and carried out fine-tuning research for specific patent tasks in order to clarify the impact of different fine-tuning strategies on the performance of LLMs in patent tasks. Considering task adaptability, model size, inference efficiency, and resource consumption, ChatGLM-6B-int4 was selected as the base model, and P-Tuning V2 was adopted as the fine-tuning method. Three categories of patent tasks are included: extraction, generation, and classification. The extraction task is patent keyword extraction. The generation tasks include: 1) innovation point generation; 2) abstract generation based on a given title; 3) rewriting an existing title; 4) rewriting an existing abstract; 5) generating novelty points based on an existing abstract; 6) generating patent advantages based on an existing abstract; and 7) generating patent application scenarios based on an existing abstract. 
Six experimental comparison dimensions are designed: 1) different training data sizes; 2) different numbers of training epochs; 3) different datasets with the same data size; 4) different prompts under the same task and data; 5) different P-Tuning V2 prefix lengths with the same training data; and 6) single-task fine-tuning versus multi-task fine-tuning. Two types of evaluation metrics were used. For extraction and generation tasks, the BLEU metric based on n-gram string matching was adopted. For classification tasks, accuracy, recall, and F1 score were used. [Results/Conclusions] Based on the fine-tuning results, several conclusions were obtained. First, a larger training data size does not always lead to better performance. Second, the appropriate number of training epochs depends on the data size. Third, under the same data distribution, different data subsets have limited influence on performance. Fourth, under the same task and dataset, different prompts have little impact on model performance. Fifth, the optimal prefix length is closely related to the training data size. Sixth, for a specific task, single-task fine-tuning performs better than multi-task fine-tuning. These conclusions provide reference and guidance for fine-tuning LLMs in practical patent information work.
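For readers unfamiliar with the generation metric, sentence-level BLEU (modified n-gram precision combined with a brevity penalty) can be sketched as follows. This is a toy, whitespace-tokenized version; production evaluations would normally use a library implementation such as NLTK's.

```python
# Compact sentence-level BLEU: geometric mean of modified n-gram precisions
# with a brevity penalty, over whitespace tokens. A toy illustration of the
# string-matching metric, not the exact implementation used in the study.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        # Modified precision: clip each n-gram count by the reference count.
        overlap = sum(min(count, r[gram]) for gram, count in c.items())
        total = max(sum(c.values()), 1)
        # Smooth zero overlaps so the geometric mean stays defined.
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return brevity * math.exp(sum(log_precisions) / max_n)
```

An identical candidate and reference score 1.0, while texts sharing no n-grams score near 0, which is why BLEU suits the extraction and generation tasks but not the classification task, where accuracy, recall, and F1 apply.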

  • GUO Hailing, ZENG Meiyun, FENG Yuxi
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0568
    Accepted: 2026-01-22

    [Purpose/Significance] Against the backdrop of national innovation-driven development strategies and the pressing need to enhance the efficiency with which scientific and technological achievements are transformed within universities, university libraries are undergoing a critical transition. They are shifting from being traditional, passive information providers to becoming proactive, embedded partners in the research and innovation value chain. However, this transition is often hampered by inherent limitations in traditional service models. This study, therefore, posits artificial intelligence (AI) as a pivotal enabler and investigates the specific mechanisms through which AI technologies can empower university libraries to achieve deep, systemic integration into the entire lifecycle of technology transfer. The research aims to provide a comprehensive theoretical framework for understanding this transformation and offer actionable, evidence-based practical pathways for academic libraries to redefine their functional boundaries and substantially strengthen the institutional support ecosystem for university technology transfer. [Method/Process] This research employs a qualitative multi-case study design, underpinned by an analytical framework constructed around the four critical, sequential stages of the technology transfer lifecycle: 1) research topic selection and project initiation, 2) research and development, 3) project conclusion and evaluation, and 4) marketization and industrialization of outcomes. Case selection followed purposive sampling criteria to ensure representation across diverse contexts, including domestic and international universities, as well as varied library types. The primary data comprised detailed case descriptions from published academic literature, institutional reports, and official service platforms. 
Within this staged framework, the analysis focuses on two intertwined dimensions at each phase: the evolution of the library's core service functions and the transformative impact of AI empowerment. Through a comparative cross-case analysis, this study examines how specific AI technologies augment traditional services, fundamentally changing the role and value proposition of libraries. [Results/Conclusions] The results show that through intelligent information analysis, knowledge association, data mining, and precise matching, AI can drive university libraries to shift from resource supply-oriented support to collaborative services that run through the entire lifecycle of technology transfer. This transformation manifests across the four-stage lifecycle as a shift: from providing literature to forecasting opportunities at the initiation phase; from offering patent data to navigating R&D pathways and risks during development; from archiving outputs to assessing value and potential at conclusion; and from disseminating information to intelligently brokering industry partnerships at the commercialization phase. Synthesizing these stage-specific transformations, this study constructs a novel, integrated service framework. This framework explicitly links specific AI capabilities with the redefined core functions of the library at each stage, illustrating the transition from a linear support model to a dynamic, AI-augmented ecosystem wherein the library serves as a central intelligence node. Meanwhile, this study reveals practical challenges in current practices, including ambiguous organizational boundaries, insufficient professional capabilities, and imperfect evaluation mechanisms oriented toward technology transfer. 
Correspondingly, it proposes strategies such as clarifying collaborative positioning, strengthening the construction of AI-empowered service capabilities, and improving technology transfer-oriented evaluation mechanisms to promote the sustainable development of AI-empowered research services in university libraries.

  • SONG Lingling, ZHANG Xinghui
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0524
    Accepted: 2026-01-21

    [Purpose/Significance] This study investigates the operational practices and strategic development pathways of intelligent consultation services in libraries globally, specifically under the impetus of artificial intelligence (AI) large language models (LLMs). By conducting a systematic analysis of representative case studies, we examine the applied technologies, emerging service models, and measurable efficacy of these AI-enhanced services. The research holds significance in offering actionable insights for the effective implementation of AI within the library sector. It aims to guide the evolution of intelligent consultation toward greater innovation and cultural-contextual adaptability, thereby providing both theoretical underpinning and practical guidance for the localized development of smart library ecosystems. [Method/Process] Employing a comparative case study methodology, this research selected 30 representative libraries from diverse international and domestic contexts as its subjects. Data were primarily gathered through structured online surveys and content analysis of publicly available service interfaces, systematically capturing the scope, functionality, and operational status of their intelligent consultation services. The analysis focused on characterizing technological applications - identifying core LLM integrations, typical functionalities, and architectural highlights. It further integrated findings to compare and contrast prevailing service models and implementation variances. Subsequently, the study conducted a multidimensional comparative assessment of the practical service effectiveness enabled by AI large models, evaluating performance across four key areas: service response efficiency and accuracy; capabilities in resource organization and structured knowledge management; tangible improvements in user service experience; and degree of service model innovation. 
[Results/Conclusions] The findings indicate that AI large model-driven intelligent consulting services exhibit pronounced advantages in key operational metrics, including enhanced response efficiency, superior knowledge synthesis and management capabilities, enriched user interaction experiences, and the facilitation of novel service paradigms. However, a comparative analysis reveals significant disparities among libraries concerning the depth of technological integration, the sophistication of service offerings, and the level of cultural and linguistic adaptation achieved. In response, the study proposes targeted strategic recommendations from three interrelated perspectives: nuanced technological application, user-centered service design, and collaborative ecosystem construction. It advocates for libraries to prioritize the synergistic balance between technological capability and humanistic service values, to achieve deeper integration with localized and institutional knowledge repositories, and to institute mechanisms for continuous service evaluation and iterative optimization. These approaches are essential for fostering more efficient, inclusive, and sustainable development of intelligent consultation services. Future research directions should encompass longitudinal studies on service effectiveness, the integration of multimodal interactive capabilities, and the formulation of ethical guidelines and governance frameworks for AI deployment in library services.

  • GUO Jinbo
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0593
    Accepted: 2026-01-20

    [Purpose/Significance] With the rapid integration of generative artificial intelligence into library services, user trust in information has begun to exhibit a new pattern characterized by high usage, low certainty, and increased reliance on institutional guarantees. Existing studies on online credibility, artificial intelligence-generated content (AIGC) applications, and library innovation have mostly examined either technical performance, information literacy, or governance issues in isolation. Few have systematically analyzed how specific AIGC features, user capabilities and the institutional environment of libraries jointly shape multi-dimensional user trust. This study focuses on AIGC supported services in public and academic libraries and constructs a comprehensive analytical framework linking technological signals, user ability and library-based institutional mediation to the formation of cognitive, emotional, and behavioral trust. The paper contributes to the refinement of trust theory in digital information environments by providing empirical evidence from a large-scale sample in China. It also offers actionable insights for libraries seeking to deploy AIGC while maintaining or enhancing their role as trusted public knowledge institutions. [Method/Process] The study is grounded in classic research on cognitive authority and online credibility, combined with recent work on AIGC, knowledge services, information literacy, and library governance. It conceptualizes user trust as a three-dimensional construct comprising cognitive, emotional, and behavioral components. AIGC related technological features are operationalized along three axes: perceived content quality, generation transparency and interactivity. 
User capability is measured through standardized digital literacy tests and indicators of professional background, while the library environment is captured by the presence of institutional arrangements such as usage guidelines, staff verification, result labelling and risk reminders. Data were collected through a large-scale questionnaire survey in ten public and academic libraries in Henan Province, yielding 2 347 valid responses. After data cleaning and reliability and validity checks, the study employed a combination of structural equation modelling, two-stage least squares estimation, threshold regression, spatial autoregressive models, dynamic panel system GMM estimation, quantile regression, and finite mixture models. This sequential strategy allowed for simultaneous identification of structural paths, endogenous relationships, non-linear and moderating effects, spatial spillovers and temporal dependence, as well as heterogeneous trust formation patterns across user groups. [Results/Conclusions] The findings confirm that user trust in AIGC enabled library services is best understood as a three-dimensional structure, in which cognitive trust influences emotional trust, and both jointly shape behavioral trust. Content quality and generation transparency exert strong and robust positive effects on cognitive trust, while interactivity mainly enhances emotional trust and indirectly affects behavioral intentions. Digital literacy and professional background introduce clear threshold and amplification effects: when user capability is below certain levels, improvements in content quality and transparency have limited impact on trust, but above these thresholds the marginal effects increase markedly. 
Library level institutional arrangements, including human review, explicit labelling and standardized usage rules, not only raise overall trust levels, but also significantly strengthen the effects of technological signals, sometimes to a degree comparable with individual level capability factors. Spatial and dynamic analyses show that trust exhibits both spillover and path dependence: practices in one library can influence neighbouring institutions through user mobility and word of mouth, and positive or negative experiences accumulate into longer-term evaluations. The study suggests that libraries should treat trust building as a core design objective when introducing AIGC, embed transparency and quality signals into interfaces and metadata, establish robust verification and correction workflows, and provide differentiated services for users with different literacy levels and professional backgrounds. The limitations include the concentration of data in one province and the use of primarily macro-level instruments for identifying causation. Future research could extend the framework to cross-regional and cross-type libraries, compare specific functional scenarios such as reference services and reading promotion, and further integrate trust analysis with broader issues of library governance, literacy education and responsibility allocation in AIGC ecosystems.
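The threshold effect reported for digital literacy can be illustrated with a toy grid-search threshold regression on simulated data. The data-generating process, the 0.6 cutoff, and all coefficients below are invented for illustration and are not the paper's estimates.

```python
# Toy threshold regression: trust responds to content quality only above a
# digital-literacy cutoff. A grid search over candidate cutoffs recovers the
# break from simulated data. The DGP, cutoff, and coefficients are invented.

import random

random.seed(1)
TRUE_CUT = 0.6
data = []
for _ in range(1000):
    lit, qual = random.random(), random.random()
    trust = 2.0 + (1.5 * qual if lit > TRUE_CUT else 0.0) + random.gauss(0, 0.1)
    data.append((lit, qual, trust))

def sse_at(cut):
    """Sum of squared OLS residuals of trust ~ quality within each regime."""
    total = 0.0
    for above in (False, True):
        pts = [(q, t) for lit, q, t in data if (lit > cut) == above]
        if len(pts) < 2:
            return float("inf")
        mq = sum(q for q, _ in pts) / len(pts)
        mt = sum(t for _, t in pts) / len(pts)
        beta = (sum((q - mq) * (t - mt) for q, t in pts)
                / sum((q - mq) ** 2 for q, t in pts))
        total += sum((t - mt - beta * (q - mq)) ** 2 for q, t in pts)
    return total

# The cutoff minimizing within-regime residuals estimates the threshold.
best_cut = min((c / 100 for c in range(10, 91)), key=sse_at)
```

Splitting the sample at the estimated cutoff and comparing the within-regime slopes mirrors the reported pattern: below the threshold, quality improvements barely move trust, while above it the marginal effect is large.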

  • GUO Yanli, GAO Rui, ZOU Meifeng, LIU Zidan
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0663
    Accepted: 2026-01-20

[Purpose/Significance] As the user base grows, the number of online comments is increasing rapidly. This massive volume of comments has broadened enterprises' innovative thinking and provided more diverse innovation options, but it has also brought the problem of information overload. How to mine practically valuable information from massive online user comments with efficient and precise methods, integrate that information effectively, and transform it into high-quality resources for identifying product innovation opportunities has therefore become a topic of great concern in both academic and industrial circles. Against this backdrop, studying how to identify product innovation opportunities based on online reviews is of great theoretical significance and practical value. Unlike previous studies, this paper uses the BERT model to accurately filter out negative user comments and identify key demand points. It also combines the characteristics of ordinary users and leading users, integrates dual-source comment data from e-commerce platforms and online communities, and associates the demand issues of ordinary users with the suggestions of leading users, enabling more accurate identification of product innovation opportunities. [Method/Process] First, we collected and pre-processed comment data from both ordinary users and leading users. Second, the BERT model and the LDA topic model were used to classify sentiment and cluster the comments, mining the problems raised by ordinary users and the suggestions offered by leading users. Finally, problem-suggestion topic mapping based on semantic similarity analysis was used to identify product innovation opportunities with high innovation value. 
[Results/Conclusions] This paper constructed a problem-suggestion product innovation opportunity identification method driven by dual-source data and selected the action camera as a case to elaborate on the method's application in the field of product innovation. The case analysis verified the feasibility of the proposed method, providing an operational reference for enterprises on how to carry out product innovation work efficiently. However, this paper still has certain limitations and needs to be improved with more abundant data in subsequent studies. First, the data collected in this paper come mainly from e-commerce platforms and online community platforms. Although these data contain a large amount of user information, deficiencies remain. In the future, we will introduce more data sources, such as news media and technology websites, to obtain more comprehensive and diverse data. Second, this paper has only conducted case application research in the field of intelligent digital products. In the future, we need to explore more fields, such as smart wearables and whole-house intelligence, to enhance the universality of the product innovation opportunity identification framework constructed in this paper.
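The final problem-suggestion mapping step can be sketched as follows. The topic labels and keyword weight vectors below are hypothetical stand-ins for the LDA topic outputs, used only to show the similarity-based matching of ordinary-user problems to leading-user suggestions:

```python
import numpy as np

# Hypothetical topic-keyword distributions over a shared vocabulary.
# "Problem" topics come from ordinary-user reviews, "suggestion" topics
# from leading-user community posts (toy values, not LDA output).
vocab = ["battery", "life", "overheat", "stabilize", "mount", "charge"]
problem_topics = {
    "short battery life": np.array([0.5, 0.3, 0.0, 0.0, 0.0, 0.2]),
    "overheating body":   np.array([0.1, 0.0, 0.8, 0.0, 0.1, 0.0]),
}
suggestion_topics = {
    "fast-charge module": np.array([0.4, 0.1, 0.0, 0.0, 0.0, 0.5]),
    "heat-sink mount":    np.array([0.0, 0.0, 0.6, 0.1, 0.3, 0.0]),
}

def cosine(a, b):
    """Cosine similarity between two keyword-weight vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Map each problem topic to its most semantically similar suggestion topic.
mapping = {
    p: max(suggestion_topics, key=lambda s: cosine(pv, suggestion_topics[s]))
    for p, pv in problem_topics.items()
}
print(mapping)
```

Each problem-suggestion pair produced this way is a candidate innovation opportunity; in practice the paper's pipeline would derive the vectors from the fitted topic models rather than hand-coded weights.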

  • PAN Yong, SUN Jing, WANG Jiandong
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0664
    Accepted: 2026-01-06

[Purpose/Significance] As data become a strategic resource in the digital economy, their quality directly affects the efficiency of value creation and the effectiveness of public governance. However, with the continuous expansion of data scale and the deepening of application scenarios, pervasive quality issues - such as inconsistencies, errors, and redundancies - have emerged as a significant bottleneck restricting the release of data element potential. High-quality public data are particularly critical for empowering government decision-making and optimizing public services. Addressing the urgent practical need for high-quality data supply, this paper relies on the public basic databases (specifically the Population Database and Legal Entity Database) of a representative city to construct a scientific, systematic, and operable data quality assessment system. The study aims to diagnose existing quality defects in these foundational assets and provide theoretical support and actionable references for relevant departments to transition from passive data management to active quality governance. [Method/Process] To ensure the assessment is both scientifically rigorous and practically applicable, this study establishes a comprehensive evaluation framework based on domestic and international research, combined with the national standard GB/T 36344-2018 and local data characteristics. The framework comprises a hierarchical structure with 6 primary indicators (Normativity, Integrity, Consistency, Accuracy, Timeliness, and Accessibility), 17 secondary indicators, and 61 specific detection items. The study employs a dual-track assessment methodology integrating automated detection tools with manual verification. Automated SQL scripts and rule engines are utilized for the large-scale quantitative detection of intrinsic dimensions, while manual checks and interviews address contextual dimensions. 
This methodology was applied to conduct a multi-dimensional evaluation of 1 367 data items across 102 datasets in the city, ensuring a thorough analysis of the data status. [Results/Conclusions] The evaluation results indicate that while the overall construction of the city's public basic databases is positive, multidimensional quality issues persist. Specifically, the assessment revealed problems such as data coding errors, non-standardized classification, missing data items, missing or duplicate primary keys, inconsistent formats, the presence of illegal characters or outliers, and data delays or discontinuations. To address these challenges, the paper proposes four systematic improvement strategies: 1) To unify data standards and coding systems to ensure consistency across departments; 2) To construct a full-process quality control mechanism covering data collection, storage, and usage; 3) To strengthen technical platform support by implementing real-time monitoring and intelligent warning capabilities; and 4) To improve organizational synergy and institutional guarantees to solidify the management foundation. These measures are intended to optimize data supply quality and support the high-quality and sustainable development of the data element market.
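The automated, rule-based SQL detections described above can be sketched with an in-memory SQLite table. The table name, columns, and rows here are hypothetical; the two queries illustrate the kinds of checks the assessment runs, namely duplicate primary keys and missing required items:

```python
import sqlite3

# Build a toy population table containing deliberate defects:
# a duplicated primary key and two rows with missing required fields.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE population (person_id TEXT, name TEXT, birth TEXT)")
conn.executemany(
    "INSERT INTO population VALUES (?, ?, ?)",
    [("P1", "Li", "1990-01-02"), ("P1", "Li", "1990-01-02"),
     ("P2", None, "1985-07-30"), ("P3", "Wang", None)],
)

# Detection rule 1: primary-key values that occur more than once.
dup_keys = conn.execute(
    "SELECT person_id, COUNT(*) FROM population "
    "GROUP BY person_id HAVING COUNT(*) > 1"
).fetchall()

# Detection rule 2: rows with any missing required data item.
missing = conn.execute(
    "SELECT COUNT(*) FROM population WHERE name IS NULL OR birth IS NULL"
).fetchone()[0]

print(dup_keys, missing)
```

A production rule engine would iterate such checks over all 61 detection items and aggregate pass rates per indicator.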

  • YUAN Shuo
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0526
    Accepted: 2025-12-31

[Purpose/Significance] The accelerated digital transformation of public cultural services has fundamentally reshaped modes of service delivery, governance frameworks, and citizen engagement. Exploring how digital technologies empower the high-quality development of public cultural services is essential for designing a modern, equitable, and efficient service system. This study contributes to the existing literature by systematically investigating not only the direct effects of digital technologies but also threshold effects, regional heterogeneity, spatial spillovers, and mediating mechanisms. This clarifies how digital innovation interacts with governance capacity and institutional environments. Unlike previous research, which relied mainly on descriptive or single-method analyses, this study employs an integrated empirical framework that captures the dynamic and multidimensional nature of digital empowerment in the public service context and enriches the theoretical and practical understanding of digital governance. [Method/Process] This study employs panel data from 31 Chinese provinces over the period 2015-2023 to systematically investigate how digital technologies influence the high-quality development of public cultural services. A combination of fixed-effects models, mediating-effects models, threshold regression models, and spatial econometric models was used to capture direct, nonlinear, regional, spatial, and mediating effects. To control for potential confounding factors, fiscal expenditure, population density, and cultural literacy were incorporated as covariates. The analysis drew on theoretical foundations conceptualizing digital technology as a new productive force and was supported by empirical data from national statistical yearbooks, digital finance indices, and governance performance indicators, ensuring both methodological rigor and contextual relevance. 
[Results/Conclusions] Digital technology significantly promotes the high-quality development of public cultural services, with measurable positive effects for each incremental increase in the digital technology development index. The influence exhibits a nonlinear threshold pattern, reflecting a "promotion-weakening-enhancement" trajectory, highlighting the necessity of integrating technological applications with governance structures, resource allocation, service design, and public digital literacy. Regional analyses reveal stronger effects in the central and western provinces, suggesting that digital technologies can help mitigate service disparities under supportive policy frameworks. The spatial econometric results indicate positive spillover effects on neighboring regions, while the mediation analysis identifies government governance capacity as a key mechanism through which technological inputs translate into service outcomes. Policy implications include reinforcing digital infrastructure, enhancing institutional support, implementing region-specific strategies, fostering inter-provincial coordination, and strengthening government-led service integration. The study has limitations, including potential unobserved concurrent causal pathways. Future research should adopt configurational methods such as qualitative comparative analysis to further elucidate the complex, multicausal dynamics of digital technology empowerment in public cultural services.
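The core of the fixed-effects estimation can be sketched with the within transformation on synthetic panel data. The province count matches the study's design (31 provinces, 9 years), but all values and the coefficient are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic panel: 31 provinces x 9 years, with province fixed effects
# that are correlated with the regressor (so pooled OLS would be biased).
n_prov, n_year, beta_true = 31, 9, 0.6
prov = np.repeat(np.arange(n_prov), n_year)
alpha = rng.normal(0, 1, n_prov)[prov]           # province fixed effects
x = rng.normal(0, 1, n_prov * n_year) + alpha    # regressor correlated with FE
y = alpha + beta_true * x + rng.normal(0, 0.3, n_prov * n_year)

def demean(v):
    """Subtract each province's time mean (the within transformation)."""
    means = np.bincount(prov, weights=v) / n_year
    return v - means[prov]

# The within estimator: OLS on province-demeaned variables sweeps out
# the fixed effects and recovers the true slope.
x_w, y_w = demean(x), demean(y)
beta_fe = float(x_w @ y_w / (x_w @ x_w))
print(round(beta_fe, 2))
```

The demeaned regression recovers a slope near the planted 0.6 despite the confounding province effects, which is the property the study relies on when estimating the direct effect of the digital technology index.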

  • YI Chenhe, ZHANG Yuting
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0495
    Accepted: 2025-12-31

[Purpose/Significance] Generative Artificial Intelligence (GAI) has rapidly reshaped the landscape of social information dissemination, bringing unprecedented network public opinion risks, such as large-scale disinformation spread, algorithmic bias-induced social inequality, extreme emotional polarization, and model hallucinations leading to cognitive deviations, which significantly amplify the complexity, suddenness, and cross-domain spillover effects of public opinion evolution. These risks not only undermine the authenticity and order of information ecosystems but also pose severe challenges to social governance, public trust, and policy-making efficiency, making accurate identification, quantitative assessment, and early warning an urgent academic and practical task. Existing research has obvious limitations: single-dimensional assessment frameworks fail to capture GAI's multi-faceted and interrelated risks, such as the concealment of generated content, algorithmic recommendation amplification, and cross-platform diffusion; traditional models such as basic BP neural networks suffer from susceptibility to local optima and poor generalization, inadequately adapting to the non-linear, dynamic, and high-dimensional attributes of GAI-generated content. To address these gaps, this study constructed a 4-dimensional risk assessment index system (content, dissemination, sentiment, and user) and proposed a GA-optimized BP neural network model, which will enrich public opinion management theories in the AI era and provide practical, efficient tools for precise risk control. It will contribute to the construction of a safe, orderly, and trustworthy online space. 
[Method/Process] A mixed research method with solid theoretical foundations (information communication theory and intelligent optimization algorithms) and empirical support was adopted: Ten typical GAI-induced public opinion events were selected from Sina Weibo (selection criteria: views ≥1 million, original posts ≥60, covering technology, society, public affairs, and consumption fields). Following a four-stage evolutionary model (formation, outbreak, mitigation, and recovery) and four early warning levels (Level I-IV, corresponding to binary outputs 1000, 0100, 0010, 0001) as specified in national emergency management standards, samples were systematically categorized into four evolutionary stages and corresponding risk grades. A 12-indicator system covering content (authenticity, misleadingness, and professionalism), dissemination (speed, scope, and diffusion path), sentiment (intensity, polarization degree, and negative ratio), and user (user influence, participant activity, and interaction stickiness) dimensions was constructed. The weights of each indicator were determined to ensure objectivity, and data preprocessing was performed via min-max normalization to eliminate dimensional differences. A 4-layer BP neural network (12 input neurons, 2 hidden layers with 15 and 10 neurons respectively, and 4 output neurons) was built, with initial weights, thresholds, and hyperparameters (learning rate and number of iterations) optimized by genetic algorithm (GA). A traditional BP model served as the control group, with 70% of data as the training set and 30% as the test set, and model performance was evaluated based on prediction accuracy. [Results/Conclusions] Experimental results confirm the significant superiority of the GA-BP model: its prediction accuracy reached 91.67%, 8.34 percentage points higher than the traditional BP model (83.33%). 
This verifies that GA optimization effectively improved model performance, enabling better capture of complex non-linear relationships among GAI-induced risk factors. The multi-dimensional index system successfully extracted core risk characteristics, realizing comprehensive identification and traceability of GAI-related public opinion risks. Limitations of this study include sample concentration on Chinese social platforms, limited case quantity, and narrow time span. Future research will expand cross-border, multi-language samples (e.g., Twitter, Facebook), enrich technical indicators (e.g., GAI content identifiability, algorithmic intervention intensity), and explore integration with deep learning models (e.g., LSTM, Transformer) to further enhance the generalizability, real-time performance, and intelligent decision-making support capabilities of the risk assessment system.
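A minimal sketch of the GA-optimization idea, on a deliberately tiny stand-in problem: here a 2-4-1 network's full weight vector is evolved to fit XOR, not the paper's 12-15-10-4 architecture or Weibo data. Elitist selection, one-point crossover, and Gaussian mutation are the same ingredients the GA contributes before (or in place of) gradient training:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in problem: fit XOR with a 2-4-1 network whose 17 weights
# are evolved directly by a genetic algorithm.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(w):
    """Forward pass: tanh hidden layer, sigmoid output."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w) - y) ** 2)       # negative MSE, higher = better

pop = rng.normal(0, 1, (60, 17))
init_best = max(fitness(w) for w in pop)
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-20:]]                     # selection
    pairs = elite[rng.integers(0, 20, (40, 2))]
    cut = rng.integers(1, 17, 40)[:, None]
    children = np.where(np.arange(17) < cut, pairs[:, 0], pairs[:, 1])  # crossover
    children = children + rng.normal(0, 0.1, children.shape)  # mutation
    pop = np.vstack([elite, children])                        # elitism

best = pop[np.argmax([fitness(w) for w in pop])]
accuracy = np.mean((forward(best) > 0.5) == y)
print(accuracy)
```

Because the elite always survives, the best fitness is monotonically non-decreasing across generations; in the paper's setup the GA output instead seeds the BP network's initial weights and hyperparameters before backpropagation.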

  • HUANG Xiaotang, YAO Qibin
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0590
    Accepted: 2025-12-19

    [Purpose/Significance] Under the strategic background of national cultural digitization and the high-quality development of public services, artificial intelligence generated content (AIGC) has become a core engine driving the digital and intelligent transformation of galleries, libraries, archives, and museums (GLAM). While AIGC offers unprecedented opportunities for content production and knowledge dissemination, current implementations often suffer from fragmentation, leading to new "data islands" and service barriers. Unlike previous studies, which treat GLAM institutions as a homogeneous whole, this paper aims to clarify the differentiated application paths of AIGC by distinguishing the unique "resource-technology-service" logic of each institution type. It seeks to reveal the structural causes of current collaborative dilemmas and construct a systematic collaborative development mechanism. This research is significant for breaking down institutional barriers, promoting the deep integration of cultural resources, and guiding GLAM institutions to shift from isolated technological upgrades to a clustered, symbiotic development model. [Method/Process] Adopting a digital ecosystem perspective, this study constructs a "Resource Attributes - Technology Adaptation - Service Goals" framework to systematically analyze the heterogeneous characteristics of the four institution types. The analysis reveals how distinct data morphologies - ranging from structured texts in libraries and semi-structured records in archives to multimodal artifacts in museums and unstructured works in art galleries - fundamentally dictate the differentiated deployment of generative text or vision models. 
By examining core capabilities including intelligent content twinning, editing, and creation, the study demonstrates how service goals strictly regulate technical choices: the emphasis on "access" and "trust" in libraries and archives necessitates technologies that ensure semantic accuracy and historical authenticity, whereas the pursuit of "experience" and "creativity" in museums and art galleries favors generative tools for immersive interaction and open-ended aesthetic expression. [Results/Conclusions] To address the identified challenges of fragmented development, the study proposes a tripartite collaborative development architecture consisting of a "Front-end Resource Layer," a "Mid-platform Technology Layer," and an "End-user Service Layer." The Front-end Resource Layer focuses on constructing a unified multimodal data foundation and standardized semantic ontology to bridge the semantic gap between heterogeneous institutional data. The Mid-platform Technology Layer advocates for the co-construction of an industry-specific general large model and a knowledge reasoning engine; by sharing API interfaces and computing power, this layer solves the high technical threshold and cost issues for smaller institutions, acting as a ubiquitous "industry capability hub." The End-user Service Layer aims to build a one-stop knowledge exploration portal and cross-domain expert workbenches, eliminating service isolation and creating integrated cultural scenarios. The study concludes that GLAM institutions must transition from "cultural containers" to "knowledge engines" through this architecture. Future research should further focus on copyright ethics, algorithmic governance, and new modes of human-machine collaboration to ensure the sustainable and trustworthy development of the digital cultural community.

  • LI Dan, FENG Danran
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0493
    Accepted: 2025-12-19

    [Purpose/Significance] Against the backdrop of intensifying global technological competition and the drive for scientific and technological progress under national innovation strategies, generative artificial intelligence (AI) technology, as an emerging disruptive technology, has had a profound impact on the economy and society through its widespread application. However, the diffusion of this technology in the market still faces numerous challenges. This paper aims to delve into the micro-level decision-making factors influencing enterprises' research and development (R&D) of generative AI technology, as well as the specific impact of user group interactions on the effectiveness of technology diffusion, by constructing a complex network evolutionary game model. The research seeks to uncover the inherent laws governing technology diffusion, providing a scientific basis for policymakers and corporate practitioners to promote the healthy development and effective diffusion of generative AI technology, thereby fostering comprehensive socio-economic progress. [Method/Process] This paper adopts the complex network evolutionary game model as the primary research method, integrating complex network theory, technological innovation diffusion theory, and social influence theory to construct a game model for corporate decision-making regarding generative AI technology. By incorporating the structural characteristics of complex networks and the dynamic mechanisms of evolutionary games, the study simulates the R&D decision-making processes of enterprises under varying conditions of user adoption rates, government subsidy levels, differences in technology benefits and costs, and technology spillover effects. Simultaneously, numerical simulation analysis is employed to explore the specific impacts of changes in these factors on the diffusion effectiveness of generative AI technology decisions, thereby thoroughly revealing the micro-mechanisms underlying technology diffusion. 
[Results/Conclusions] The research results indicate that an increase in user adoption rates significantly and positively drives the diffusion of generative AI technology, with moderate user dependency behaviors further accelerating this process. Government subsidies play a particularly prominent role in promoting technology diffusion when user adoption rates and the initial proportion of enterprises choosing R&D strategies in the network are low. However, as these proportions rise, the marginal effect of subsidies gradually diminishes. The difference in benefits between enterprises that develop generative AI technology and those that do not has a marked impact on technology diffusion, whereas the impact of cost differences is relatively minor. Furthermore, the spillover effects of generative AI technology may induce free-rider behaviors among other enterprises, hindering technology diffusion. Additionally, when the maturity level of generative AI technology is low, it reduces user trust in the technology, thereby inhibiting its widespread dissemination. Based on these conclusions, this paper proposes policy recommendations such as encouraging user participation, flexibly adjusting subsidy policies, enhancing technology maturity, and establishing intellectual property laws and regulations to facilitate the effective diffusion of generative AI technology.
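The imitation dynamics described above can be sketched as follows. The network, payoff constants (subsidy, cost, spillover-style complementarity), and Fermi temperature are hypothetical choices for illustration, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Firms on a random network choose R&D (1) or not (0). An R&D firm's
# payoff rises with the share of R&D neighbours (complementarity) plus a
# government subsidy, net of cost; non-R&D firms earn 0 (toy payoffs).
n, p_edge, subsidy, cost = 200, 0.05, 0.3, 0.5
upper = np.triu(rng.random((n, n)) < p_edge, 1)
adj = (upper | upper.T).astype(int)                 # symmetric, no self-loops
strategy = (rng.random(n) < 0.2).astype(int)        # 20% initial R&D firms

def payoffs(s):
    deg = np.maximum(adj.sum(1), 1)
    neigh_share = (adj @ s) / deg
    return s * (neigh_share + subsidy - cost)

for _ in range(2000):
    i = int(rng.integers(n))
    nbrs = np.flatnonzero(adj[i])
    if nbrs.size == 0:
        continue
    j = int(rng.choice(nbrs))
    pi = payoffs(strategy)
    # Fermi rule: imitate neighbour j with probability increasing in the
    # payoff gap (temperature 0.1 controls selection intensity).
    if rng.random() < 1.0 / (1.0 + np.exp(-(pi[j] - pi[i]) / 0.1)):
        strategy[i] = strategy[j]

adoption = strategy.mean()
print(adoption)
```

Varying `subsidy` or the initial R&D share in this sketch reproduces the qualitative pattern reported above: subsidies matter most when adoption starts low, and their marginal effect fades as the R&D share rises.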

  • HE Ying, SUN Wei, LI Zhoujing, MA Xiaomin
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248
    Accepted: 2025-12-12

[Purpose/Significance] The formulation of evidence-based science and technology policy critically relies on the timely and accurate provision of relevant, high-quality evidence. However, current evidence recommendation practices often suffer from significant limitations in both accuracy and efficiency, hindering the scientific rigor and intelligent application of evidence within the policy-making process. These shortcomings limit policymakers' ability to leverage the most pertinent research and data, potentially leading to suboptimal decisions. Addressing this critical gap, this research proposes a novel knowledge graph-based evidence recommendation method. The primary objective is to substantially enhance the scientific foundation and intelligent capabilities of evidence utilization during policy formulation. This method aims to empower policymakers by providing more reliable, contextually relevant, and efficiently retrieved data support. Ultimately, this will foster more robust, transparent, and demonstrably effective science and technology policies grounded in comprehensive research insights. [Method/Process] To achieve these objectives, this study systematically constructs a domain-specific knowledge graph meticulously centered on the intricate citation relationships between policy documents and academic research papers. This graph serves as the foundational semantic network representing entities (policies, articles, topics, authors, institutions, etc.) and their multifaceted interconnections. Most importantly, we introduce and adapt the Knowledge Graph Attention Network (KGAT) algorithm in an innovative way. Leveraging KGAT's sophisticated graph attention mechanisms, our model effectively captures and learns complex, high-order semantic relationships between policy requirements (represented as queries or specific nodes) and potential evidence sources (research paper nodes). 
This deep relational understanding enables nuanced evidence relevance scoring and personalized recommendation. To rigorously validate the proposed method's practical efficacy and performance, we conducted an extensive empirical study within the specific domain of agricultural science and technology policy. Furthermore, to demonstrate real-world applicability and provide a tangible tool for policymakers, we designed and implemented a fully functional Evidence Intelligent Recommendation System (EIRS). This system seamlessly integrates the core KG-based recommendation engine and incorporates advanced intelligent analysis capabilities. Significantly, EIRS supports an end-to-end workflow initiated by natural language policy questions posed by users, enabling intuitive interaction and precise, demand-driven evidence retrieval and recommendation. [Results/Conclusions] Experiments conducted on real-world datasets within the agricultural science and technology policy domain demonstrate the superior performance of the proposed KGAT-based recommendation method. It consistently outperforms several state-of-the-art baseline algorithms across multiple key evaluation metrics, including precision, recall, normalized discounted cumulative gain (NDCG), and mean reciprocal rank (MRR). This quantitatively confirms its significantly stronger recommendation capability. In addition to quantitative metrics, the model inherently offers enhanced explainability due to the transparent nature of the knowledge graph structure and the attention weights learned by KGAT, allowing for insights into why specific evidence is recommended, based on its semantic connections to the policy query. Concurrently, the implemented EIRS has proven to be highly effective in practice. It efficiently identifies and recommends evidence resources exhibiting a strong match with complex policy requirements expressed in natural language. 
The system's successful deployment underscores its potential to tangibly augment the scientific underpinning of science and technology policy development. By effectively bridging the gap between vast research knowledge and specific policy needs through intelligent, accurate, and explainable recommendations, this research provides a novel, practical pathway towards realizing truly intelligent and rigorously evidence-based policy formulation processes. The methodology and system prototype offer a valuable and adaptable framework for various policy domains beyond the presented case study.
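The ranking metrics reported above (MRR, and NDCG with binary relevance) can be computed as in this sketch, using toy values for one query's ranked evidence list and its relevance judgments:

```python
import math

# Hypothetical output for one policy query: evidence IDs ranked by the
# recommender (best first), plus the set judged relevant.
ranked = ["p7", "p2", "p9", "p1", "p4"]
relevant = {"p2", "p4"}

def mrr(ranked, relevant):
    """Reciprocal rank of the first relevant item (0 if none retrieved)."""
    for rank, doc in enumerate(ranked, 1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def ndcg(ranked, relevant, k=5):
    """Binary-gain NDCG@k: DCG of the ranking over DCG of an ideal ranking."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]) if d in relevant)
    idcg = sum(1.0 / math.log2(i + 2)
               for i in range(min(len(relevant), k)))
    return dcg / idcg

rr = mrr(ranked, relevant)      # first relevant item sits at rank 2
score = ndcg(ranked, relevant)
print(round(rr, 3), round(score, 3))
```

In evaluation these per-query values are averaged over the full query set; the paper additionally reports precision and recall at fixed cutoffs.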

  • MAO Kaiyan
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0429
    Accepted: 2025-12-02

    [Purpose/Significance] Chinese classical texts are central to preserving and transmitting traditional culture; however, promoting them among children has long faced many obstacles: the linguistic barrier posed by classical Chinese, the cognitive distance caused by cultural discontinuity, and the limitations of static and monotonous promotional forms. These challenges have often resulted in low levels of engagement and comprehension among young readers. The recent emergence of Sora-type video generation models, characterized by their ability to produce coherent long-form narratives, integrate multimodal information, and simulate spatially consistent scenes, opens up new opportunities for bridging this gap. This study aims to investigate how such models can be effectively employed in the promotion of Chinese classics among children, to evaluate their potential benefits and inherent risks, and to develop practical strategies that align technological capabilities with educational and cultural objectives. [Method/Process] This research adopts a combined approach of literature review, case study, and comparative analysis. First, it reviews existing literature on the application of artificial intelligence in reading promotion, highlighting current achievements and limitations. Second, it uses representative Chinese classics, including Shan Hai Jing, Strange Tales from a Chinese Studio (Liaozhai Zhiyi), and The Book of Songs (Shijing), to examine how Sora-generated videos function in different promotional contexts. Third, it constructs an analytical framework based on three interrelated dimensions: scenes, content, and approaches. Within this framework, the study identifies opportunities, delineates challenges, and proposes targeted countermeasures. [Results/Conclusions] Sora-type video generation can substantially enhance the promotion of Chinese classics among children. 
At the scene level, it allows traditional spaces to be extended into immersive and hybrid environments, thereby broadening access beyond classrooms and libraries. At the content level, it transforms abstract imagery and complex narratives into visual forms, reducing cognitive barriers and accommodating differentiated learning needs. At the approach level, it facilitates text-image complementarity, cross-media integration, and personalized recommendations, thereby strengthening engagement and sustaining reading motivation. However, the study also cautions against significant risks. These include the mismatch between generated content and specific promotional settings, the danger of oversimplification or distortion of classical texts, and the over-reliance on audiovisual materials that might undermine children's ability to engage in deep textual reading. To address these risks, the article proposes a threefold strategy: differentiated scene design, content transformation with cultural fidelity, and complementary pathways that ensure children transition from video to text.

  • HU Anqi
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0448
    Accepted: 2025-11-28

[Purpose/Significance] The rapid proliferation of generative artificial intelligence (AI), exemplified by models like DeepSeek-R1, has precipitated a paradigm shift across various sectors, positioning AI literacy as an indispensable competency for the future workforce. University students, as digital natives and pivotal agents of technological adoption and innovation, stand at the forefront of this transformation. Their proficiency in understanding, utilizing, and critically evaluating AI technologies directly influences their academic performance, research capabilities, and long-term career adaptability. Although existing literature has begun to explore the conceptual landscape of AI literacy, a significant gap remains. There is an absence of a robust, empirically validated competency framework specifically tailored to the unique learning contexts, developmental needs, and future roles of university students within China's higher education system. This study aims to address this critical gap by constructing and validating a comprehensive AI literacy competency framework for college students. Its primary significance lies in its ability to move beyond theoretical discourse and provide an evidence-based model that can guide the systematic development of targeted training programs. This enriches the theoretical underpinnings of AI literacy education and offers practical guidance for cultivating high-quality talent equipped for the intelligent era. [Method/Process] This research employed a mixed-methods approach, integrating qualitative and quantitative methods to provide both theoretical grounding and empirical robustness. The study commenced with a qualitative phase utilizing the grounded theory methodology. A systematic analysis of 112 core academic publications (2019-2024) from databases such as CNKI and Web of Science was conducted. 
Through a rigorous process of open coding, axial coding, and selective coding, facilitated by NVivo11 software, we extracted 300 initial concepts, which were subsequently synthesized into 26 sub-categories and ultimately 4 main categories. This process resulted in the preliminary construction of a four-dimensional AI literacy competency framework. Following this, a quantitative phase was implemented to test and refine the framework. A detailed questionnaire was developed based on the identified dimensions and indicators. Utilizing a five-point Likert scale, the questionnaire measured 26 variables corresponding to the framework's sub-components. A total of 586 valid responses were collected from undergraduate students across universities in Jiangsu Province, China. The dataset was randomly split into two halves. The first subset (N=293) underwent exploratory factor analysis (EFA) using SPSS to uncover the underlying factor structure and assess the internal consistency reliability via Cronbach's alpha. The second subset (N=293) was subjected to confirmatory factor analysis (CFA) using AMOS to verify the hypothesized factor structure, evaluate model fit indices (e.g., CMIN/DF, CFI, TLI, RMSEA), and establish convergent and discriminant validity by examining average variance extracted (AVE) and composite reliability (CR). [Results/Conclusions] The empirical analyses strongly support the validity and reliability of the proposed competency framework. The EFA clearly identified four distinct factors that aligned perfectly with the predefined dimensions, with a total variance explained of 69.916% and all factor loadings exceeding 0.6. The CFA results demonstrated excellent model fit (CMIN/DF=1.921, CFI=0.950, TLI=0.943, RMSEA=0.056), confirming the structural integrity of the framework. Furthermore, all constructs exhibited high internal consistency (Cronbach's α>0.90) and satisfactory convergent (AVE>0.5, CR>0.7) and discriminant validity. 
The finalized framework, therefore, comprises four interconnected core dimensions: AI Cognition (encompassing knowledge of basic concepts, applications, value, and risks), AI Skills (covering practical abilities from tool usage and programming to critical evaluation and innovation), AI Ethics (emphasizing social responsibility, privacy, intellectual property, and legal compliance), and AI Thinking (fostering higher-order cognitive abilities like computational, critical, and systemic thinking). Based on this validated framework, the study proposes a systematic and multi-faceted training system. This system outlines clear training objectives, identifies key stakeholders (e.g., university libraries, teaching centers, schools, and external enterprises), designs layered training content and pathways corresponding to each dimension, and suggests implementation strategies focusing on faculty development, a comprehensive assessment and feedback mechanism, and the strategic integration of AI-related resources. The main limitation of this study is that the questionnaire respondents in the empirical testing stage were primarily college students. Future research could involve teachers, employers, and AI experts, through the Delphi method, expert interviews, and other approaches, to refine the index weights and content of the competency framework from multiple perspectives and thereby enhance its authority and generalizability.
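The split-sample validation pipeline described above can be sketched for the EFA half in Python. Everything below is illustrative: the data are a synthetic stand-in for the 26-item survey (the actual analysis used SPSS and AMOS), and the simple-structure loadings are assumed for the example.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic stand-in for the survey: 4 latent dimensions, 26 Likert items,
# each item loading on exactly one dimension (an assumed simple structure).
rng = np.random.default_rng(42)
n = 293                                  # half-sample size used for the EFA
latent = rng.normal(size=(n, 4))
loadings = np.zeros((26, 4))
for j in range(26):
    loadings[j, j % 4] = 0.8
X = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 26))

# EFA with varimax rotation, mirroring the 4-factor solution reported above
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(X)
alpha = cronbach_alpha(X)
print(f"rotated loading matrix: {fa.components_.shape}, alpha = {alpha:.3f}")
```

The CFA half (model fit indices, AVE, CR) has no equally standard Python routine and is omitted here.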

  • REN Fubing, LUO Ya
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0326
    Accepted: 2025-10-20

    [Purpose/Significance] In the era of widespread social media, network cluster behavior has emerged as a significant phenomenon that shapes online public opinion and collective action. Although existing research has thoroughly examined macro-level drivers and developed evolutionary stage models for network cluster behavior, there is still a significant gap in our understanding of the micro-level cognitive mechanisms that dynamically propel its evolution. Cognitive biases, which are inherent tendencies in human cognition, are amplified in online group interactions. This study specifically addresses this gap by adopting a cognitive bias perspective to investigate the evolution mechanism of network cluster behavior, focusing on campus hot events as highly relevant and sensitive case studies. These events often involve students, parents, educational institutions, and the wider public, covering core issues such as campus safety, management disputes, teacher-student relations, and student rights. Their inherent emotional resonance, rapid dissemination within specific online communities, and potential for severe damage to reputation and social order necessitate deeper understanding. The core innovation and significance of this research lie in: 1) Systematically integrating cognitive bias theory to analyze the complete lifecycle evolution of network cluster behavior in campus events; 2) Empirically revealing how specific biases dynamically manifest and interact at various stages, shaping the trajectory of network cluster behavior; 3) Providing a richer theoretical framework for network cluster action theory; 4) Offering empirical evidence for formulating targeted governance strategies to mitigate risks associated with campus-related online crises, thereby promoting constructive online discourse and campus stability. [Method/Process] To rigorously investigate the core research question, this study employed the grounded theory methodology. 
Based on sustained high popularity rankings on the "Zhiwei Shijian" platform, ten representative campus hot events were systematically selected to ensure coverage of diverse campus issues. Extensive datasets of user comments related to these ten events were collected from the Sina Weibo platform, serving as the core empirical foundation. The data collection timeframe spanned the complete lifecycle of each event, from initial emergence to eventual subsidence. Following the grounded theory process, the collected textual data underwent a meticulous three-stage coding procedure to induce and refine textual themes. Through this process, facilitated by qualitative data analysis software, a substantive theoretical model was ultimately constructed. This model delineates the evolutionary path and internal mechanisms of network cluster behavior in campus events under the influence of cognitive biases. The grounded theory method was deemed highly appropriate due to its capacity for deeply exploring complex social processes and emergent phenomena directly from rich, context-specific data. [Results/Conclusions] The study found that the evolution mechanism of network cluster behavior in the context of campus hot topics mainly consists of five stages: public opinion induction, public opinion bias, public opinion diffusion, public opinion outbreak, and public opinion subsidence. Based on these findings, governance strategies for such campus network events have been proposed, including identifying triggering factors, avoiding cognitive biases, enhancing user literacy, promoting collaborative guidance, and mitigating secondary risks.

  • CHI Yuzhuo, ZHANG Bing
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0348
    Accepted: 2025-10-17

    [Purpose/Significance] Open scientific data policies play a pivotal role in promoting the open sharing, unrestricted access to, and reuse of scientific data, thereby enhancing research efficiency and driving innovation. Despite their significance, research on the diffusion of these policies has predominantly focused on policy formulation, often neglecting the critical aspect of policy adoption and implementation at the local government level. This study aims to address this gap by comprehensively examining the factors that influence the adoption of open scientific data policies by prefecture-level governments in China. The research was motivated by the need to understand how these policies spread across different regions, as well as the underlying mechanisms that facilitate or hinder their adoption. In doing so, the study expands the existing knowledge base by shedding light on the dynamics of policy diffusion in the context of open scientific data, a relatively under-explored area compared to other policy domains. [Method/Process] To achieve its objectives, the study employed an integrated research methodology. First, it utilized a policy diffusion model, adapted from the well-established Berry model, to theoretically frame the research. This model was enhanced by incorporating insights from a comprehensive literature review, which helped identify key internal and external factors influencing policy diffusion. Second, the study employed event-history analysis to empirically test these factors using data from 286 Chinese cities over the period from 2018 to 2022. This method allows for the examination of the temporal sequence of policy adoption and the identification of causal relationships between the influencing factors and policy diffusion. Finally, a fuzzy-set qualitative comparative analysis (fsQCA) was applied to refine the understanding of multiple causal configurations that lead to successful policy adoption. 
This approach captures the complexity and interdependence of factors in policy diffusion processes, offering a nuanced perspective that goes beyond traditional statistical methods. [Results/Conclusions] The study identified four primary pathways for the diffusion of open scientific data policies in China: resource-driven, organization-and-human-capital-led, multi-stakeholder collaborative, and technology-guided. The resource-driven pathway emphasizes the significance of research funding and the establishment of professional organizations in facilitating policy adoption. The organization-and-human-capital-led pathway highlights the role of government official mobility and a skilled workforce in driving policy diffusion. The multi-stakeholder collaborative pathway underscores the importance of coordinated efforts among various stakeholders, including government agencies, research institutions, and industry partners. Finally, the technology-guided pathway focuses on innovation capacity and professional management as key drivers of policy adoption. The findings reveal a heavy reliance on administrative measures in driving policy diffusion, which may lead to unintended consequences such as policy sustainability issues and a lack of alignment with local needs. Therefore, local governments are encouraged to adopt tailored diffusion strategies that consider their specific contexts and resource endowments. Future research should explore the performance of these policies in achieving their intended outcomes and conduct comparative studies across different regions to enhance the generalizability of the findings.
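Berry-style event history analysis is often operationalized as a discrete-time hazard model on a city-year risk set. The sketch below assumes that setup with synthetic data: the covariates (fiscal capacity, share of neighboring adopters) are hypothetical stand-ins for the internal and external determinants such studies test, not the paper's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic city-year risk set: 286 cities observed 2018-2022. Each city
# stays in the risk set until it adopts the policy (the "event").
rng = np.random.default_rng(7)
rows, labels = [], []
for city in range(286):
    fiscal = rng.normal()                    # hypothetical internal factor
    adopted = False
    for year in range(2018, 2023):
        if adopted:
            break                            # city leaves the risk set
        neighbor_share = rng.uniform()       # hypothetical diffusion proxy
        logit = -3.0 + 0.8 * fiscal + 1.5 * neighbor_share
        event = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append([fiscal, neighbor_share, year - 2018])
        labels.append(int(event))
        adopted = event

# Discrete-time hazard model: logistic regression on the stacked city-years
X, y = np.array(rows), np.array(labels)
model = LogisticRegression().fit(X, y)
print("coefficients (fiscal, neighbor share, period):", model.coef_[0])
```

Positive coefficients on the two substantive covariates indicate a higher annual adoption hazard, which is the quantity event-history analysis interprets.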

  • JIANG Jingze, ZHOU Tianmin, LI Mei, CHENG Cheng, CHEN Haiyan
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0289
    Accepted: 2025-10-17

    [Purpose/Significance] With the rapid advancement of artificial intelligence (AI), university libraries are undergoing a deep transformation from traditional resource repositories to intelligent service ecosystems. This transformation poses a significant challenge to the conventional competencies of librarians and underscores the necessity for a systematic reconstruction of these competencies. Existing studies often lack empirically supported and integrative models, and they seldom bridge the gap between AI application and competence development. To address these shortcomings, this study proposes a core competence model for hybrid AI librarians, integrating technical, service, and management dimensions. The research highlights its innovation by not only theorizing but also empirically validating the model through grounded data, positioning the study as a meaningful contribution to the discourse on digital librarianship. Unlike previous literature, it integrates AI platform practices within the competency framework, enriching the model's theoretical underpinnings and enhancing its practical applicability. This provides actionable implications for the sustainable development of librarianship in the context of national strategies for digital transformation and technological innovation. [Method/Process] The study employed a mixed-methods approach. First, a literature review was conducted to analyze trends in AI applications within university libraries. Then, semi-structured in-depth interviews were carried out with ten librarians from multiple universities that have deployed the DeepSeek intelligent platform. The participants covered technical, service, and management positions, with more than three years of experience using AI tools and a distribution across middle to senior professional titles. 
Following data collection, grounded theory was applied with three levels of coding (open, axial, and selective) to inductively derive categories and explore how technical, service, and management competencies interact. The principle of data saturation was strictly observed to ensure methodological rigor, and no additional categories emerged after the three competency domains were established. [Results/Conclusions] Findings indicate that the core competencies of hybrid AI librarians revolve around three interdependent domains. Technical competence involves intelligent tool operation, data analysis, and system maintenance, supporting the integration of AI into daily workflows. Service competence emphasizes user-centered design, personalized recommendation, and human-AI collaborative interaction, ensuring that technical functions translate into user value. Management competence addresses resource allocation, cross-department collaboration, and ethical governance, safeguarding sustainability, compliance, and innovation. Together, these dimensions form a "technology-service-management" dynamic balance model, characterized by reinforcing loops in which technology drives service, service demands managerial support, and management stabilizes technology-service integration. Furthermore, a training and cultivation framework was proposed, offering differentiated professional pathways based on librarians' roles and growth stages. The study concluded that such a model not only enhances service effectiveness but also contributes to national innovation strategies. The study's limitations include its scope, which is limited to a single country and a small sample size. Future research should expand the sample base, employ comparative studies across institutions, and further examine the weighting of competencies.

  • HUO Ruijuan, ZHANG Hai
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0467
    Accepted: 2025-10-09

    [Purpose/Significance] In the current era, libraries are essential to fostering a reading-oriented society because they act as key hubs for disseminating knowledge. The goal is to increase public cultural literacy and foster an intellectual atmosphere. However, the lack of a professional framework for promoting reading in libraries severely hinders these efforts. Without clear standards, activities lack systematic planning, which leads to inefficiency and an inability to address diverse reading needs. This study systematically examines the professionalization of library reading promoters. By examining professionalization dimensions and influencing factors, it fills a research gap, enriches library science theory, and provides guidance for cultivating high-quality reading promotion teams. [Method/Process] In-depth interviews were used as the primary method to ensure research rigor. Fifteen participants were selected using purposeful sampling, including library scholars, experienced reading promoters, and front-line librarians. Each interview lasted between 50 and 70 minutes and covered the status of reading promotion, the challenges involved, and future expectations. Three stages of grounded theory analysis were then applied: open coding to extract initial concepts, axial coding to establish relationships between concepts, and selective coding to form a theoretical model. This data-driven approach strengthens the credibility of the results. [Results/Conclusions] The research identified three core dimensions of professionalization. For literacy specialization, reading promoters are required to have a solid grasp of library science, literature, and educational psychology. Training specialization emphasizes the establishment of a systematic training program that covers promotion skills, event planning, and user communication. A well-designed training system can continuously improve the professional capabilities of reading promoters. 
Reading promotion specialization focuses on adopting evidence-based and innovative strategies, which can enhance the effectiveness of reading promotion. Four influencing factors were also discovered: the curriculum system determines the content and quality of training; the resource system provides necessary physical and digital assets for reading promotion; the user service system affects the communication and interaction with readers; and the standardization system provides guidelines for the evaluation of reading promotion work. Based on these findings, practical suggestions were put forward, including optimizing the training model by combining theoretical learning with practical operation and establishing a standardized management system for reading promotion. Nevertheless, the study has certain limitations, primarily due to its relatively small sample size. Future research could expand the scope of the sample, conduct long-term follow-up studies on the impact of professionalization, and explore integrating emerging technologies such as AI and big data into the professionalization of reading promotion to further promote the development of library reading promotion services.

  • ZHANG Ning, HE Boyun
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0345
    Accepted: 2025-09-23

    [Purpose/Significance] The global population is aging at an unprecedented pace. As a key tool to address the challenges of digital inclusiveness for the elderly, developing a digital capital scale is of utmost importance. Digital capital not only encompasses the abilities and skills of the elderly in using information technology, but also focuses on the interaction among the social resources, cultural capital, and economic capital they acquire in the digital environment. Therefore, it helps enhance the theoretical understanding of the heterogeneity of the elderly's digital capabilities. [Method/Process] First, a semi-structured interview method was adopted: in-depth interviews were conducted with 24 elderly individuals, guided by the digital capital framework and grounded in digital life scenarios in China. We also referred to existing studies on the digital literacy and digital capabilities of the elderly. Based on the coding results of the interview transcripts, a 7-dimensional scale for measuring the digital capital of the elderly was derived. Then, a preliminary reliability and validity analysis was conducted on a pre-test sample of 180 respondents, and the dimension indicators were appropriately adjusted. Subsequently, using the data from 380 formal questionnaires, the scale was verified and improved. Based on the principle of conceptual interpretability, the factor names of the four dimensions were re-examined, and the final version of the scale was established. Elbow estimation and the K-means clustering algorithm were then used to classify the digital capital levels of the elderly. [Results/Conclusions] The final scale consists of 19 items, covering four dimensions: digital resource acquisition ability, digital creation and expression ability, digital environment adaptation ability, and digital tool learning ability. Following optimization, the scale demonstrates excellent reliability and validity, and aligns closely with the aging-friendly scenarios. 
The scale can serve as a standardized instrument for measuring the digital capital level of the elderly population in China, laying the foundation for future large-scale surveys. By applying this scale, it is possible to effectively distinguish between groups of elderly individuals with varying levels of digital capital, providing empirical support for personalized digital services for the elderly. For the first time, this study systematically applies the digital capital theoretical framework to the elderly population, which compensates for the lack of standardized measurement tools and highlights the unique needs and challenges of the elderly in terms of the dimensions, usage scenarios, and capability transformation. The proposed hierarchical model of digital capital among the elderly deepens our theoretical understanding of the differences in digital capabilities among this population.
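The elbow-plus-K-means classification step can be sketched as follows. The four-dimension scores here are simulated around three assumed capital levels, since the questionnaire data are not public; the cluster count and centers are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in for the 380 formal questionnaires: item-averaged
# scores on the four final dimensions (resource acquisition, creation and
# expression, environment adaptation, tool learning), simulated around
# three assumed digital-capital levels.
rng = np.random.default_rng(3)
centers = np.array([[1.5, 1.2, 1.4, 1.3],   # low digital capital
                    [3.0, 2.8, 3.1, 2.9],   # medium
                    [4.4, 4.2, 4.5, 4.3]])  # high
scores = np.vstack([c + rng.normal(scale=0.3, size=(126, 4)) for c in centers])

# Elbow estimation: within-cluster sum of squares (inertia) for k = 1..6;
# the "elbow" is the k after which extra clusters barely reduce inertia.
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(scores).inertia_
            for k in range(1, 7)}
print({k: round(v, 1) for k, v in inertias.items()})

# Final classification at the chosen k
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
sizes = np.bincount(labels)
print("cluster sizes:", sizes)
```

With well-separated levels, the inertia curve drops sharply up to k = 3 and flattens afterwards, which is what the elbow heuristic reads off.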

  • QIN Miao, WANG Qingfei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0259
    Accepted: 2025-09-17

    [Purpose/Significance] With the rapid advancement of artificial intelligence (AI) technologies, libraries are transforming their service models and content offerings. Large AI models have opened up broader development opportunities for smart libraries. However, the rational adoption and application of these models have posed a significant challenge to libraries. This study employs multimodal resource profiling to conduct research on the optimization of large AI model utilization in libraries, revealing the intrinsic relationships among various types of library resource data. Based on these insights, the optimization methods and related strategies are extracted to enhance the efficiency of library resource utilization and improve user experience. [Method/Process] Multimodal resource profiling is a comprehensive representation that captures the intrinsic characteristics of library resources through tag extraction, aggregation analysis, and visualization of diverse data generated within the libraries. By utilizing a novel clustering algorithm, it overcomes the high sensitivity to input parameters characteristic of traditional algorithms and achieves natural clustering across resources with varying densities, thereby enabling the generation of accurate multimodal resource profiles. The resource profiling model provides a theoretical foundation for optimizing the deployment and utilization of large AI models in libraries, while also delivering rich data support for subsequent AI model applications. The adoption strategy proposed in this study is divided into two aspects: model selection and model utilization. Model selection focuses on compatibility and accuracy to achieve an optimal match between the model and both library resources and user needs. Model utilization emphasizes the effectiveness and usability of the output, thereby enhancing operational efficiency and user experience. 
Based on this framework, the overall operational mechanism of the adoption optimization strategy is designed around continuous model monitoring, real-time collection of user feedback, iterative model updates, and dynamic adjustment of multimodal resource profiles. [Results/Conclusions] This study takes a public digital library on "Telegram" as a case study to generate multimodal resource profiles, which meticulously categorize user groups, interests, and emotional intensities. By integrating the large AI model adoption optimization strategy with the outcomes of multimodal resource profiling, the model autonomously identifies the most task-relevant features, reducing the need for manual intervention. Not only does it achieve high prediction accuracy, but the explanatory feature weights it outputs also provide a quantifiable basis for service optimization. Through comparative experiments with commonly used structural modules, the proposed method demonstrates significant advantages over traditional recommendation systems in terms of both resource utilization efficiency and user engagement. This study lays the foundation for the future development of library technology and opens up new possibilities for the application of multimodal resource profiling.

  • GUO Xiaojing, WEN Tingxiao
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0397
    Accepted: 2025-09-16

    [Purpose/Significance] In today's knowledge economy, where scientific research and innovation drive social change, accurately and scientifically assessing the social impact of scientific research achievements has become key to optimizing the global scientific research ecosystem. This article focuses on international systems for evaluating the social impact of scientific research achievements. It provides in-depth analysis of typical international models and strategic guidance for China to build a more comprehensive and efficient evaluation system. [Method/Process] Based on the theoretical definition of the social impact of scientific research achievements, eight major cases of third-party evaluations were selected: the EU SIAMP, the US STAR METRICS, the UK REF, the Dutch SEP, the Italian VQR, the Canadian CAHS, the Australian ERA, and the Japanese NIAD-QE. Using a cross-national comparative analysis method, a comprehensive analysis was conducted across three dimensions: system elements (establishment time, establishing entity, main characteristics, evaluation scope, and strategic objectives), mechanism processes (definition of evaluation objects, establishment of evaluation procedures, application of evaluation results), and methodological tools (definition of social impact-related content, evaluation methods, and indicator content). Subsequently, relevant information was collected through literature research and online investigation to identify key characteristics. [Results/Conclusions] International evaluation systems are guided by national strategic needs and incorporate social impact into the entire research lifecycle management process through legislation. These systems also link influence to funding allocation. These systems operate using policy-driven mechanisms, collaborative efforts among stakeholders, data-driven methodologies, and dynamic feedback loops. 
The key characteristics of typical international research evaluation models are as follows: 1) Multi-dimensional indicators: Moving beyond traditional academic metrics, evaluation frameworks now encompass a wide range of impacts, including the effects of research outcomes on social welfare, industrial development, and employment. 2) Dynamic adjustment: As the socio-economic and technological environment evolves, the social impact evaluation systems of international research outcomes also undergo dynamic adjustments and innovations. 3) Multi-stakeholder collaboration: This involves diversified participation, cross-disciplinary and cross-departmental collaboration, and the full involvement of stakeholders throughout the process. Based on the above findings, this study offers insights at different stages of social impact assessment of scientific research achievements. Prior to implementation, additional indicators aligned with domestic strategic priorities, such as environmental sustainability, social equity, and cultural heritage preservation, should be incorporated alongside traditional metrics, and the policy and legal framework should be refined. During implementation, a multi-stakeholder collaborative evaluation platform should be established, and a dynamic system incorporating resilience coefficients should be developed to address uncertainties. After completion, a long-term monitoring and tracking mechanism should be implemented to understand ongoing impacts, with feedback-driven updates to the indicator system. This approach aims to foster a healthy evaluation ecosystem, accelerate the translation of research outcomes into societal value, and promote the integrated development of scientific research and social progress.

  • WEI Tianyu, LIU Zhongyi, ZHANG Ning
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0142
    Accepted: 2025-04-27

    [Purpose/Significance] Against the backdrop of digital government construction, government digital humans have emerged as a new type of service agent in human-machine collaborative governance, and the mechanism by which their social role positioning influences public adoption behavior urgently requires theoretical exploration. Most existing studies have focused on the technical level. This study, based on the perspective of social role theory, explores the influencing mechanism of different role positioning of government digital humans in government service scenarios on public adoption behavior, which is of great significance for optimizing government services and improving the intelligent level of government services. [Method/Process] An experimental research method was adopted to construct a two-factor between-group experimental design of "social role-business type", and a simulation experiment of government service scenarios was carried out through random grouping. Based on previous studies, we defined the role positioning of "advisors" and "decision-makers" for government digital humans, and constructed experimental scenarios by combining two service scenarios of consultation and approval. The subjects were randomly grouped to complete the role cognition test and human-computer interaction tasks. Data were collected through a research path combining scenario simulation and questionnaire surveys. The psychological mechanism and decision-making logic of the public's adoption behavior were analyzed through the data analysis results. [Results/Conclusions] The research findings are as follows: 1) There is a significant interaction effect between the social roles and business types of government digital humans. 
In approval service scenarios, the decision-maker role is more capable of promoting public adoption behavior than the advisor role; 2) Human-computer trust perception plays a crucial mediating role in the influence path of social roles on the public's adoption behavior, revealing the core value of the trust mechanism in human-computer interaction; 3) The synergy effect between role authority and task fit constitutes an important mechanism influencing public cognition. This study expands the explanatory boundary of the social role theory in the field of intelligent government services and provides theoretical support for the construction of smart government services. However, there are still certain limitations: the simulated service scenarios in the experimental design cannot fully reproduce the complexity of real government services. Future research can extend the multi-dimensional role classification system, deepen the mechanism exploration through mixed-methods research, verify the applicability of the theoretical model in real government service scenarios, and expand the existing conclusions. In addition, the dynamic impact of long-term interaction between government digital humans and the public on behavioral evolution is also a potential research direction.
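The reported role-by-task interaction corresponds to a standard 2x2 between-subjects analysis. A minimal sketch, assuming simulated adoption scores and hypothetical effect sizes (the cell sizes and coefficients below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical simulation of the 2x2 between-subjects design:
# role (0 = advisor, 1 = decision-maker) x task (0 = consultation, 1 = approval).
# Adoption is boosted by the decision-maker role mainly in the approval
# scenario, i.e. a positive role-by-task interaction, as the study reports.
rng = np.random.default_rng(1)
n_cell = 50
role = np.repeat([0, 0, 1, 1], n_cell)
task = np.tile(np.repeat([0, 1], n_cell), 2)
adoption = (3.0 + 0.1 * role + 0.1 * task + 0.8 * role * task
            + rng.normal(scale=0.5, size=4 * n_cell))

# OLS with an interaction term: the coefficient on role*task estimates how
# much extra adoption the decision-maker role earns in approval tasks.
X = np.column_stack([np.ones_like(role), role, task, role * task])
beta, *_ = np.linalg.lstsq(X, adoption, rcond=None)
print("intercept, role, task, role x task:", np.round(beta, 2))
```

A significant positive interaction coefficient is the regression analogue of the interaction effect an ANOVA on this design would report.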

  • WANG Yuanming, WANG Xueli
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0065
    Accepted: 2025-04-17

    [Purpose/Significance] The ongoing digital transformation has led to significant changes in public cultural services, particularly in content generation, communication channels, and modes of public participation. "Accessibility," a key indicator of the extent to which citizens' cultural rights are realized, is typically assessed along four dimensions: availability, acceptability, accessibility, and adaptability. Previous research has focused primarily on the supply side of accessibility, examining how factors such as the distribution of cultural resources, infrastructure development, and policy support affect user engagement. However, with the widespread adoption of digital technologies, individuals' ability and willingness to access information, utilize services, and provide feedback - collectively referred to as "digital literacy" - has become an increasingly important variable influencing cultural participation. Consequently, this study seeks to explore the relationship between users' digital literacy and the accessibility of public cultural services from a demand-side perspective. It aims to provide a more systematic theoretical framework and practical approach to optimizing the effectiveness of public cultural services. [Method/Process] This study assesses users' digital literacy by examining their level of digital access, Internet usage, and service availability based on data collected from the Beijing-Tianjin-Hebei region. A structured questionnaire yielded 892 valid responses. A generalized ordered logit model is employed to analyze the relationship between users' digital literacy and the accessibility of public cultural services, including the substitution and overlap effects between digital literacy and the various dimensions of service accessibility. [Results/Conclusions] A digital divide currently exists between different demographic groups. 
A significant substitution effect is observed between traditional public cultural accessibility and users' digital literacy, with limited overlap between the two. Digitization has driven the modernization of public cultural resources and services, particularly in terms of technology and service delivery. However, there remains a time lag between users' digital literacy and the digital transformation of the public cultural supply side. This lag suggests that the digital needs of users and the availability of digital cultural services are not fully aligned, which negatively impacts the effectiveness of public cultural services. Therefore, enhancing users' digital literacy, especially improving their ability to adapt to digital cultural resources, is a crucial factor in transitioning public cultural services from "accessibility" to "enjoyment". In promoting the digital upgrading of public cultural services, greater emphasis should be placed on developing users' capabilities and anticipating their needs.
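For intuition, the plain proportional-odds (ordered logit) model underlying this family of methods can be fitted by maximum likelihood as below; the generalized ordered logit the study applies relaxes this model by letting the slope vary across thresholds. The data, slope, and cut-points here are all simulated assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Hypothetical illustration: a single digital-literacy score predicting a
# three-level accessibility rating via a proportional-odds model.
rng = np.random.default_rng(0)
n = 800
literacy = rng.normal(size=n)
latent = 0.8 * literacy + rng.logistic(size=n)   # assumed true slope 0.8
rating = np.digitize(latent, [-0.5, 0.7])        # 0 = low, 1 = mid, 2 = high

def neg_loglik(params):
    # One shared slope beta, two ordered cut-points c1 < c2
    beta, c1, c2 = params[0], *np.sort(params[1:])
    cum1 = expit(c1 - beta * literacy)            # P(rating <= 0)
    cum2 = expit(c2 - beta * literacy)            # P(rating <= 1)
    probs = np.column_stack([cum1, cum2 - cum1, 1 - cum2])
    return -np.log(probs[np.arange(n), rating] + 1e-12).sum()

res = minimize(neg_loglik, x0=[0.0, -1.0, 1.0], method="Nelder-Mead")
beta_hat = res.x[0]
print(f"estimated literacy slope: {beta_hat:.2f}")
```

In the generalized version, `beta` would become a separate coefficient per threshold, which is what allows substitution effects to differ across accessibility levels.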

  • Yuhong CUI, Jintao ZHAO
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0721
    Accepted: 2025-04-02

    [Purpose/Significance] The development of artificial intelligence generated content (AIGC) technology has engendered novel prospects for creating inclusive and expansive learning environments. In light of the potential risks associated with the misuse of AIGC tools, the present study analyzes the factors influencing students' use of AIGC tools within the context of artificial intelligence literacy. It constructs a conceptual model framework and explores the relational paths among influencing variables, aiming to provide a theoretical basis for the advancement of AI literacy education in libraries and other educational institutions. [Method/Process] This study adopts a mixed-method approach that integrates Structural Equation Modeling (SEM) and mediation analysis to explore the relationships between the factors that influence AIGC tool usage. A conceptual relationship model was constructed based on the Technology Acceptance Model (TAM), a widely used model for assessing users' acceptance of new technologies. The study builds on this model by adding AI literacy as a key variable to examine its moderating role in shaping students' use of AIGC tools. The data were collected via a survey disseminated to university students who had used AIGC tools. The survey incorporated a series of questions designed to assess constructs such as effort expectancy, performance expectancy, behavioral intention, AI literacy, and actual usage of the tools. SEM was employed to test the proposed hypotheses and validate the relationships between the identified factors, and mediation analysis was used to assess indirect effects between variables. [Results/Conclusions] The findings indicate that effort expectancy exerts a direct impact on students' actual use of AIGC tools, and indirectly promotes usage behavior through performance expectancy and behavioral intention. 
Furthermore, AI literacy plays a crucial role in improving the conversion rate from intention to actual usage. Specifically, AI literacy significantly enhances students' acceptance of AIGC tools, especially in terms of increasing their practical ability to use these tools effectively. The research also identifies key factors that influence students' use of AIGC tools, such as performance expectancy, effort expectancy, and behavioral intention, and highlights the significant moderating effect of AI literacy on the relationships among these factors. This study provides empirical evidence for the effective integration of AIGC technology into the education sector and offers theoretical guidance for libraries and educational organizations on how to design AI literacy education programs that help students adapt to a digitally driven society. Future research may encompass a more extensive examination of the utilization of AIGC tools across different academic disciplines, with a particular emphasis on their implementation in specialized domains. Additionally, the proposed model may be refined to better accommodate a wider range of educational contexts and learning scenarios.
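The product-of-coefficients logic behind such a mediation analysis (predictor → mediator → outcome, e.g. effort expectancy acting through behavioral intention) can be sketched as follows. The variable names, simulated data, and bootstrap confidence interval are illustrative assumptions, not the study's actual model or results:

```python
import numpy as np

def ols_slopes(X, y):
    """OLS with an intercept; returns the slope coefficients only."""
    Z = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[1:]

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a*b for x -> m -> y."""
    a = ols_slopes(x[:, None], m)[0]                 # x -> m
    b = ols_slopes(np.column_stack([m, x]), y)[0]    # m -> y, controlling x
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = [indirect_effect(x[idx], m[idx], y[idx])
           for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])
```

A full SEM additionally estimates the measurement model for each latent construct; the sketch covers only the indirect-path arithmetic on observed scores.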

  • Chaochen WANG, Ayang QI, Xiaoqing XU, Linwei CUI
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0760
    Accepted: 2025-03-31

    [Purpose/Significance] This study examines how online social platform interactions triggered by IP-based games stimulate users' active learning and reading. The social demands within gaming communities and their derivative reading-sharing interactions constitute dual intrinsic motivations that promote autonomous reading behaviors. Exploring new developmental directions for reading promotion through digital game dynamics and group-based social guidance provides broader research perspectives for innovative knowledge acquisition and pedagogical learning paradigms. [Method/Process] Taking the game "Black Myth: WUKONG" as the research background, we collected relevant comments and original texts from four social media platforms. Using the LDA model for topic classification of the effectively collected data, users who demonstrated marked behavioral tendencies towards book-related engagement and reading activities attributable to the "WUKONG" game experience were manually identified from the dataset. We explored the details of their social discourse and account behavior through user-account backtracking, studying the factors that stimulate users' interest in reading and active reading behavior. [Results/Conclusions] Analysis of the five thematic clusters identified by LDA modeling revealed that, among user behaviors focused on the theme of cultural exploration, 38.3% of the user behavior data showed increased exploratory engagement with original literary works and related content during "WUKONG"-mediated group interactions. Whether this interactive exploration is closely connected to reading habits needs further study. Further analysis showed that a portion of users were influenced in their subsequent behaviors by this game and its social interactions. 
Through text mining of user content on key topics, it was found that 61.15% of user accounts had no prior engagement history, representing first-time participants in the cultural learning interactions of the "WUKONG" game. Notably, 23.7% of this cohort spontaneously expressed self-directed reading intentions during the game-social scenario. As the dominant subgroup in the dataset, their behavioral patterns suggest that gamified social platforms may serve as critical trigger mechanisms. The factors that stimulate users to read independently include competing for the right to speak in social interactions and obtaining gaming experiences. Accordingly, strategic practices for autonomous reading should be implemented through digital content guides, transmedia narrative interactions, and visual scene experiences. This research investigates the orienting mechanisms of digital games and community interactions in edutainment convergence, demonstrating both theoretical value and practical implications for user behavior analysis and reading promotion. While the study design ensured breadth of data collection, the heterogeneity of social attributes across platforms warrants further investigation. Subsequent studies should conduct platform-specific comparative experiments to strengthen the empirical foundation for behavioral intervention strategies.
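The LDA topic-classification step infers latent themes from word co-occurrence across comments. A minimal collapsed Gibbs sampler conveys the mechanism on toy word-id documents; the study's actual corpus, vocabulary, and five-topic solution are not reproduced here:

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over documents of word ids.

    Returns (theta, phi): per-document topic distributions and
    per-topic word distributions.
    """
    rng = np.random.default_rng(seed)
    D = len(docs)
    ndk = np.zeros((D, n_topics))            # doc-topic counts
    nkw = np.zeros((n_topics, vocab_size))   # topic-word counts
    nk = np.zeros(n_topics)                  # tokens per topic
    z = []                                   # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(0, n_topics, size=len(doc))
        z.append(zd)
        for w, t in zip(doc, zd):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                  # remove current assignment
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                t = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = t                  # resample and restore counts
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + n_topics * alpha)
    phi = (nkw + beta) / (nk[:, None] + vocab_size * beta)
    return theta, phi

# Two toy "themes": documents drawn from word ids 0-2 vs 3-5.
docs = [[0, 1, 2, 0, 1, 2]] * 3 + [[3, 4, 5, 3, 4, 5]] * 3
theta, phi = lda_gibbs(docs, n_topics=2, vocab_size=6)
```

In practice a library implementation (e.g. gensim or scikit-learn) would be used on tokenized comment text; the sketch only shows the sampling update behind topic assignment.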

  • Zhihao GUAN, Zhiyi SHAN, Tian LI, Ruixue ZHAO
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0764
    Accepted: 2025-03-18

    [Purpose/Significance] To address semantic ambiguity and the need for in-depth representation of soybean breeding knowledge, a structured knowledge model was established to define the key concepts involved in the breeding process and their interactions, standardize the definition and organization of soybean breeding knowledge, and promote the unified expression of knowledge. [Method/Process] By analyzing the characteristics of the knowledge structure in the field of soybean molecular breeding and following the Stanford seven-step method of ontology construction, a semantic model of soybean molecular breeding was established using the ontology construction tool Protégé 5.6.3. A total of 48 classes were constructed in the soybean breeding concept ontology, clarifying the concepts and hierarchical associations among traits, compounds, enrichment pathways, and growth classifications. Seven types of causal relationships and three types of static relationships were defined. Finally, an ontology-based knowledge graph was built from the PubMed literature, and the knowledge units with the Dt1 gene as the central node were queried. [Results/Conclusions] This study integrated existing knowledge bases and ontologies related to soybean breeding, established a knowledge model at the biomolecular level in the field of soybean breeding, and provides a reference for knowledge sharing and semantic integration in this field. Compared with existing knowledge models, this study analyzed the characteristics of the knowledge structure in soybean breeding, extracted the key entity types and relationship types in the process of hypothesis generation, and constructed an ontology model that can describe gene expression patterns in soybean growth and development more comprehensively. 
This is of great significance for discovering key genes associated with specific traits and analyzing the molecular regulatory networks underlying those traits, which will help in precisely designing and optimizing breeding strategies. The knowledge model constructed in this study can be applied to knowledge discovery, causal reasoning, and other scenarios in soybean breeding, supporting experimental design and promoting interdisciplinary communication. A limitation of this study is that the ontology was constructed manually, without automated natural language processing methods. In addition, subsequent use of the soybean breeding knowledge model will need to keep pace with frontier developments in soybean breeding: new concept types, concept names, and relationship names should be added in time according to the knowledge description needs of field scientists, and the model should be regularly maintained and expanded.
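Querying the "knowledge unit with the Dt1 gene as the central node" amounts to collecting every triple incident to one entity in the graph. A sketch over hand-written triples; the relation and entity names here are hypothetical illustrations, not drawn from the paper's actual ontology:

```python
from collections import defaultdict

# Hypothetical triples for illustration only; not the paper's actual
# soybean breeding ontology content.
triples = [
    ("Dt1", "regulates", "stem_growth_habit"),
    ("Dt1", "expressed_in", "shoot_apical_meristem"),
    ("E1", "represses", "Dt1"),
    ("Dt1", "homolog_of", "TFL1"),
]

def neighborhood(triples, center):
    """Group every edge touching `center` by (relation, direction)."""
    hits = defaultdict(list)
    for s, p, o in triples:
        if s == center:
            hits[(p, "out")].append(o)   # center as subject
        if o == center:
            hits[(p, "in")].append(s)    # center as object
    return dict(hits)

dt1_unit = neighborhood(triples, "Dt1")
```

A Protégé-exported OWL ontology would normally be queried through SPARQL or a library such as rdflib; the dict-based version above just makes the one-hop retrieval explicit.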

  • Feiyan HUANG
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0723
    Accepted: 2025-02-25

    [Purpose/Significance] With the rapid development of international academic digital publishing, ebook sales have surpassed those of print books since 2020, and many academic libraries in the USA are adopting e-preferred book collection development policies. In recent years, China's academic ebook market has been growing rapidly: well-known academic publishers such as Science Press and Tsinghua University Press have successively launched their own ebook platforms, and high-quality ebook integrator platforms such as Keledge and Cxstar have emerged, leading to a qualitative leap in both the quality and quantity of Chinese academic ebooks. The rapid development of the Chinese and English ebook markets has made collaborative collection development of print and ebooks feasible, which is considered a good solution to tight collection budgets and insufficient library space in Chinese academic libraries. The paper aims to propose a strategic direction and implementation path for collaborative collection development of print and ebooks, and to provide a reference for the high-quality development of academic library collection development under the background of "Double First Class" construction. [Method/Process] The paper summarizes the main characteristics of the three phases of SUSTech Library's book collection development, and introduces the library's collaborative collection development practices for print and ebooks from four aspects: collaborative acquisition, collaborative management, collaborative services, and collaborative evaluation. A collaborative acquisition strategy should be based on four factors: collection assurance, patron needs, library space, and cost effectiveness. Collaborative management should redesign the business process in terms of funding plans, book selection, acquisition implementation, metadata management, and statistics. 
Collaborative services should be based on the needs, recommendations, and usage of user groups and individual users. Collaborative evaluation refers to a structured analysis of the print and ebook collections across five dimensions: language, subject, publisher, book type, and year of publication. [Results/Conclusions] The paper puts forward suggestions from three aspects: top-level design, deep integration of business processes, and integrated management platforms. Academic libraries' management teams should develop collaborative collection development policies and strategies from the top down. Deep integration of business processes should support optimization of the organizational structure and adjustment of job positions, replacing the traditional approach of defining job positions and responsibilities based on a single material type or business process. Integrated management platforms should support the integration of multiple metadata sources and business processes, and have strong knowledge discovery and service capabilities.

  • Huimin HE
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0648
    Accepted: 2024-12-10

    [Purpose/Significance] As part of the plan to build a better new digital life, information literacy (IL) is a survival skill in the information age. As a major repository of information resources, public libraries are the backbone of public IL education. Research on readers' knowledge construction behavior in public library IL education not only helps readers improve their individual IL and self-learning ability, but is also constructive for the development of the national IL system. At present, relevant research focuses mainly on frameworks and practical suggestions for IL education, and rarely analyzes readers' knowledge construction in IL education from the perspectives of knowledge management and reader behavior. [Method/Process] Starting from readers' knowledge construction, this paper proceeds to the necessary transformation of knowledge scenes in public library IL education, and finally to development strategies for how readers can realize knowledge construction and innovation in public library IL education. Information literacy education in public libraries involves both explicit and implicit knowledge. Current IL education activities in public libraries focus on sharing explicit knowledge and enhancing the value of collection knowledge, but more attention should be paid to exploring implicit knowledge and enhancing the knowledge transformation and innovation of participating readers. The path of readers' knowledge construction in IL education can be divided into the individual, team, and organizational levels. Across these levels of knowledge construction, four knowledge transformation modes constitute the spiral process of knowledge innovation. 
Based on the MOA model, this paper analyzes the motivational factors, opportunity factors, and ability factors of readers' knowledge construction in public library IL education, as well as the constituent elements of the knowledge construction community and their interactive relationships, from a multi-dimensional dynamic perspective. The MOA-based model of readers' knowledge construction in public library IL education analyzes how motivational, opportunity, and ability factors influence readers' knowledge construction behavior within the knowledge construction community of IL education. Motivation points to opportunity and ability, suggesting that motivation leads readers to seek opportunities and develop the necessary skills; motivation, opportunity, ability, and the knowledge construction community all point to readers' knowledge construction behavior, suggesting that together they influence and promote the knowledge construction process. [Results/Conclusions] Readers' knowledge construction in the MOA model is a dynamic, multi-stage process. In the IL education activities of public libraries, readers gradually deepen their understanding of knowledge and apply it to practical situations by stimulating motivation, exploiting opportunities, and improving skills, ultimately constructing knowledge. At the same time, the feedback and evaluation stage further enhances knowledge construction strategies, updates development goals, and sustains reform and innovation. This study is only a theoretical extension of practical work experience and does not involve rigorous, formal data verification or case studies. In the next step, a questionnaire will be used to collect sample data, and through hypothesis testing and model fitting, the application of the MOA model to readers' knowledge construction behavior will be further verified.