Accepted

Accepted, unedited articles published online and citable. The final edited and typeset version of record will appear in the future.

  • WANG Chao, CHEN Jie, HOU Hui
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0730
    Accepted: 2026-02-13

    [Purpose/Significance] Against the global surge of generative artificial intelligence (GenAI) and large language models (LLMs), academic libraries are undergoing a critical paradigm shift in their reference services. While "AI Virtual Librarians" (AIVL) are increasingly adopted to enhance efficiency, cross-national evidence regarding how they are configured alongside traditional "Human Live Reference" (HLR) remains scarce. This study aims to reveal the structural differences in human-AI configurations between Chinese and international top-tier university libraries. It seeks to identify the divergence between "technology-driven" and "human-centric" service models and proposes a governance-oriented hybrid pathway to inform the digital transformation of academic libraries. [Method/Process] The study established two high-resource samples: 42 libraries from China's "Double First-Class" universities and 94 libraries from the U.S. News Top 100 World Universities. A systematic website investigation and standardized interaction tests were conducted to collect data on service availability and deployment models. The study not only quantified the deployment of HLR and AIVL (classified into rule-based and LLM-based) but also qualitatively evaluated the "Core Service Contents" and "Linkage Mechanisms" (e.g., traceability, boundaries, and human fallback). Chi-square tests were employed for statistical analysis, and robustness checks were performed using both broad and strict counting rules to ensure validity. [Results/Conclusions] Results indicate that while the overall service coverage is similar across groups (approx. 74%), the service structure diverges significantly. International libraries predominantly rely on the "Human-only" mode (66.0%), prioritizing deep research support, academic integrity, and privacy protection. In contrast, Chinese libraries show a significantly higher adoption of AIVL (57.1% vs. 8.5%) and LLMs (26.2% vs. 1.1%), with 52.4% operating in an "AI-only" mode. Content analysis reveals that Chinese AIVLs focus on transactional efficiency and 24/7 accessibility, whereas international counterparts focus on distinct research guides and governance. The study identifies a critical trade-off: China's aggressive AI adoption enhances accessibility but faces challenges regarding answer hallucinations and the lack of human fallback mechanisms. To address these challenges, the paper recommends a "Human-AI Collaborative Loop" model. Key strategies include: 1) Implementing risk-tiered routing, where low-risk transactional queries are handled by AI and high-risk research inquiries are directed to humans; 2) Optimizing AI reliability through Retrieval-Augmented Generation (RAG) and controlled knowledge bases to ensure traceability; 3) Establishing clear governance boundaries and stratified implementation paths for libraries with different resource levels, ensuring a balance between technological innovation and service ethics.
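
    A minimal sketch of the kind of chi-square comparison reported above is given below. The counts are back-calculated from the reported shares (57.1% of 42 Chinese libraries ≈ 24; 8.5% of 94 international libraries ≈ 8) and are illustrative only, not the authors' data.

    ```python
    # Hedged sketch: chi-square test on AIVL deployment counts reconstructed from
    # the reported percentages; it illustrates the analysis, not reproduces it.
    from scipy.stats import chi2_contingency

    #          deployed, not deployed
    table = [[24, 42 - 24],   # Chinese "Double First-Class" libraries (57.1% of 42)
             [8,  94 - 8]]    # U.S. News Top 100 world-university libraries (8.5% of 94)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
    ```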

  • LV Kun, YU Linrong, WEN Yuzhu, LI Beiwei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0519
    Accepted: 2026-02-12

    [Purpose/Significance] The governance of health medical data is fundamentally challenged by the "protection-sharing" paradox: the critical need to safeguard sensitive personal information often conflicts with the desire to utilize these data for public benefit. This issue is particularly pressing under China's "Healthy China" initiative, which promotes data sharing while the rapid expansion of medical APPs has led to increasing data misuse incidents. Existing research has extensively explored technological solutions such as blockchain, but a significant gap remains in understanding the dynamic, strategic interactions among the key stakeholders - government regulators, APP operators, and users - who operate with bounded rationality. This study addresses this gap by constructing a tripartite evolutionary game model. Its primary significance lies in dynamically modeling the co-evolution of strategies to identify critical leverage points, thereby providing a theoretical basis for designing effective collaborative governance mechanisms that can reconcile data protection with utilization and ensure the sustainable development of the health data ecosystem. [Method/Process] This study established a three-party evolutionary game model involving government regulators, medical-health APP operators, and users, based on the core assumption of bounded rationality. The model incorporated a comprehensive set of parameters, including direct benefits, various costs (compliance, regulatory), data risks, and network benefits under different regulatory scenarios. Replicator dynamic equations were derived for each party to mathematically describe the evolution of their strategy choices over time. The stability of the system's equilibrium points was rigorously analyzed using Lyapunov's first method to identify key stability thresholds. To validate the theoretical analysis and explore the dynamic evolutionary paths, numerical simulations were conducted using MATLAB. These simulations tested the impact and sensitivity of critical parameters - such as user-perceived data risk under operator self-discipline, user network benefits under dynamic regulation, government compliance rewards, and penalties for overdevelopment - from various initial strategy combinations. [Results/Conclusions] The analysis yielded several critical findings. First, users' authorization decisions are highly sensitive to the operational context, and they are significantly positively influenced by the perceived level of operator self-discipline and the observed intensity of government dynamic regulation. Enhancing user network benefits under effective regulation and reducing perceived data risks are paramount to encouraging authorization. Second, for APP operators, increasing government penalties for overdevelopment acts as a powerful deterrent, rapidly steering operators towards compliance. In contrast, government financial rewards for compliance, while effective, must be carefully balanced against their potential fiscal burden, which can slow the government's own stabilization into a dynamic regulatory role. Third, the system exhibits strong path dependence, capable of converging towards either an inefficient equilibrium (Non-Authorization, Overdevelopment, Passive Regulation) or the optimal Pareto state (Authorization, Self-discipline, Dynamic Regulation), depending heavily on initial conditions. 
The study concludes that resolving the paradox requires a multi-faceted strategy: advancing and ensuring robust anonymization technologies, implementing intelligent graded supervision that combines incentives and punishments, and firmly establishing institutional safeguards for user data sovereignty to build essential trust. A key limitation is the omission of data leakage risks from government data openness. Future work will integrate empirical data and consider user heterogeneity to refine the model.
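
    For readers less familiar with replicator dynamics, the sketch below shows the general form of such a tripartite system (users x, operators y, regulators z) integrated numerically. The payoff-gap expressions and parameter values are illustrative placeholders, not the paper's calibrated model, which derives them from its benefit, cost, risk, and network-benefit parameters.

    ```python
    # Hedged sketch of tripartite replicator dynamics (users x, operators y, regulators z).
    # The fitness gaps below are placeholders, NOT the paper's model: each gap would be
    # derived from the paper's benefit/cost/risk/network-benefit parameters.
    from scipy.integrate import solve_ivp

    def fitness_gaps(x, y, z, p):
        """Payoff gaps: (authorize - not), (self-discipline - overdevelop), (dynamic - passive)."""
        du_user = p["network_benefit"] * z + p["trust_gain"] * y - p["data_risk"] * (1 - y)
        du_oper = (p["penalty"] + p["reward"]) * z - p["overdev_profit"] * x
        du_gov  = p["social_benefit"] * (1 - y) - p["regulation_cost"] - p["reward"] * y
        return du_user, du_oper, du_gov

    def replicator(t, s, p):
        x, y, z = s
        du, dv, dw = fitness_gaps(x, y, z, p)
        return [x * (1 - x) * du, y * (1 - y) * dv, z * (1 - z) * dw]

    params = dict(network_benefit=2.0, trust_gain=1.0, data_risk=1.5, penalty=3.0,
                  reward=1.0, overdev_profit=2.5, social_benefit=2.0, regulation_cost=0.8)
    sol = solve_ivp(replicator, (0, 50), [0.3, 0.3, 0.5], args=(params,))
    print("final strategy mix (x, y, z):", sol.y[:, -1].round(3))
    ```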

  • ZHANG Keyong, WU Shuang
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0701
    Accepted: 2026-02-12

    [Purpose/Significance] Against the backdrop of the digital wave and the Healthy China initiative, efforts to enhance national health information literacy face challenges, including an insufficient supply of high-quality popular science content and low public enthusiasm for its dissemination. This study aims to explore the internal driving forces, core influencing factors, and transmission paths of the willingness to share online health popular science information. It further intends to provide theoretical support for regulatory authorities and popular science platforms in formulating incentive policies and safeguard mechanisms, thereby promoting the participation of social entities in popular science dissemination, increasing the supply of high-quality popular science resources, and enhancing the health information literacy of the general public. [Method/Process] A three-stage research design of "Grounded Theory - Fuzzy DEMATEL - ISM" was adopted. Firstly, interview data from diverse groups were collected through semi-structured interviews, and Grounded Theory coding was applied to extract the initial influencing factors and construct a multi-dimensional driving force system. Secondly, Fuzzy DEMATEL was used to calculate the centrality and causality degrees of the factors so as to identify the key ones. Finally, the interpretive structural modeling (ISM) method was employed to integrate the influencing factors, establish a hierarchical structure, and clarify the transmission logic and action mechanism. This design not only derives the influencing factor system directly from the interview materials but also reveals the interaction relationships among the factors, in line with the research requirements and trends in the field of information science. [Results/Conclusions] The Grounded Theory analysis identified 13 influencing factors, categorized into four dimensions. The personal dimension includes four factors: interpersonal interaction traits, perceived utility, health information literacy, and self-efficacy. The information dimension consists of four factors: information quality, information source credibility, information richness, and information clarity. The platform dimension comprises two factors: the interaction promotion mechanism and platform technology. The social dimension contains three factors: the social economy, social public events, and the clustering effect. Fuzzy DEMATEL analysis indicated that perceived utility, health information literacy, information clarity, and the social economy are the key factors. ISM analysis revealed a four-layer hierarchical structure of influencing factors from the superficial to the deep, with the social economy being the deepest-layer factor. Additionally, four key transmission paths were identified. Based on these conclusions, four suggestions are proposed. Firstly, from the personal dimension, efforts should be made to mobilize users' initiative. Secondly, from the information dimension, content creators and sharers should improve information quality and clarity. Thirdly, from the platform dimension, platforms should cooperate actively with content sharers and optimize the interaction mechanism. Finally, from the social dimension, the government should promote the development of the health popular science industry. In subsequent studies, empirical tests (such as structural equation modeling and fsQCA) can be incorporated to verify the reliability and validity of the theory.
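
    The core DEMATEL computation referred to above can be summarized in a few lines. The sketch below assumes the fuzzy step (aggregating experts' triangular fuzzy judgments into a crisp direct-influence matrix) has already been completed, and uses a small hypothetical 4-factor matrix rather than the paper's 13 factors; the final thresholding line is only a simplified stand-in for building an ISM reachability matrix.

    ```python
    # Minimal DEMATEL sketch on an already-defuzzified direct-influence matrix A.
    # The 4x4 influence scores (0-4 scale) are hypothetical.
    import numpy as np

    A = np.array([[0, 3, 2, 1],
                  [1, 0, 3, 2],
                  [2, 1, 0, 3],
                  [1, 2, 1, 0]], dtype=float)

    D = A / A.sum(axis=1).max()                  # normalized direct-influence matrix
    T = D @ np.linalg.inv(np.eye(len(A)) - D)    # total-relation matrix T = D(I - D)^-1

    R, C = T.sum(axis=1), T.sum(axis=0)
    centrality, causality = R + C, R - C         # prominence and cause/effect degree
    print("centrality:", centrality.round(3))
    print("causality: ", causality.round(3))

    # Crude ISM-style reachability by thresholding T (a simplification for illustration)
    reachable = (T >= T.mean()).astype(int)
    print("reachability matrix:\n", reachable)
    ```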

  • WU Yuhao, ZHOU Zhigang, LIU Wei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0727
    Accepted: 2026-02-12

    [Purpose/Significance] As core hubs for public cultural services and the inclusive dissemination of knowledge, smart libraries are accelerating their digital transformation. However, they also face multiple digital risks such as data fragmentation, insufficient technological adaptation, and prominent system vulnerabilities, which seriously constrain the stability and sustainability of public cultural services. The construction of digital resilience has become a key support for smart libraries to respond to environmental changes and ensure the realization of their core functions. This paper focuses on the sustainable development demands of smart libraries in the digital age. Based on the dual-wheel drive perspective of "data elements - digital technology", it explores the generation logic and improvement path of digital resilience. This approach not only adds a new dimension to the theoretical system of digital risk governance in smart libraries, but also offers practical solutions to real problems such as data fragmentation and insufficient technical adaptation. Furthermore, it can enhance the stability and efficiency of public cultural services. [Method/Process] Supported by theories of data governance, technological innovation, and organizational resilience, this research adopts a progressive approach of literature review, logical deconstruction, framework construction, and path optimization, integrating literature research, system deconstruction, and logical deduction methods. We systematically analyze the penetration and impact of data elements and digital technologies on the resource, service, technology, and organizational dimensions of smart libraries, clarify the correlation logic and operational mechanism between the dual-wheel drive and digital resilience, construct practical approaches from two aspects - the release of data element value and the collaboration of digital technology clusters - and provide a multi-dimensional guarantee system. [Results/Conclusions] The essence of digital resilience in smart libraries lies in their capacity for dynamic adaptation, efficient response, and continuous evolution in the face of digital risks. Its formation relies on the deep collaboration between data elements and digital technologies: data elements, by building a multimodal collaborative data ecosystem, break down information silos and lay a solid resource foundation for digital resilience; digital technologies, relying on the collaborative efforts of technology clusters such as big data, artificial intelligence, and blockchain, form a full-cycle risk response system covering risk perception, emergency response, and system recovery. The coupled interaction between the two promotes a qualitative leap in digital resilience from passive risk resistance to active value creation, ultimately achieving deep integration between the data element - digital technology drivers and resilience construction. Based on this, practical suggestions are put forward: smart libraries should strengthen the standardized construction of data governance, promote the scenario-based application of technology clusters, and improve cross-departmental collaboration mechanisms.

  • LI Mei, YIN Mingzhang
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0735
    Accepted: 2026-02-12

    [Purpose/Significance] As digital technologies such as 5G and generative AI become more prevalent in higher education, university libraries have evolved from traditional collections of books to ecosystems of cross-modal and multi-source resources, encompassing core collection resources, open-access resources, and user-generated content. However, the "resource silo" issue caused by heterogeneous resources and the mismatch between passive services and dynamic user scenarios in research and teaching remain unresolved. Existing studies lack integrated closed-loop mechanisms linking resources, scenarios, and users. This study aims to address these gaps by promoting libraries' transformation from "resource storage centers" to "proactive knowledge service centers." Its key innovation lies in constructing a scenario-driven three-dimensional collaborative model, which bridges the disconnect between resource integration and scenario adaptation, providing theoretical and practical support for intelligent library development. [Method/Process] Guided by ERG demand theory and context-aware computing, this study adopts a mixed-methods approach combining literature research, technical design, and case validation. A three-dimensional collaborative model of "Resource Integration - Scenario Adaptation - Smart Services" was proposed. For resource integration, a "three-dimensional integration + four-step fusion" framework was developed: standardized access via unified DCAT-AP/RDA metadata and multi-protocol gateways, associative reorganization through cross-modal semantic matching and knowledge graph aggregation, and hierarchical storage (hot/warm/cold tiers). The four-step fusion includes data preprocessing, modality conversion (ViT, Whisper-large, YOLOv8 models), feature fusion (attention mechanism + Transformer encoder), and knowledge generation (knowledge graphs, rule bases). An innovative five-dimensional dynamic scenario model (S=f(P,R,S,T,C)) quantifies user profiles, resource attributes, spatial locations, temporal contexts, and social connections for precise scenario identification. Technically, a "cloud-edge-device" architecture provides support, while a hierarchical service pathway (instant/in-depth/customized services) and a multi-dimensional evaluation system (resource/service/user dimensions) ensure closed-loop optimization. [Results/Conclusions] The model effectively achieves in-depth integration of multi-source cross-modal resources and precise scenario adaptation. Validated through typical applications - full-cycle research support and immersive teaching (VR ancient book restoration, MR anatomy demonstration) - it significantly enhances resource utilization efficiency and user experience, resolving the core pain point of resource-scenario disconnection. The model strongly supports libraries' transformation from passive resource supply to proactive knowledge services. Limitations include limited application of cross-modal technologies to virtual reality resources, insufficient coverage of management and social service scenarios, and the need for long-term validation of the evaluation system. Future research will deepen large-model-aided cross-modal fusion, expand scenario coverage, improve the evaluation system with third-party participation, and promote inter-university resource sharing to better support higher education development.
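
    One possible reading of the five-dimensional scenario model S = f(P, R, S, T, C) is a weighted scoring function over the five context dimensions, as sketched below; the weights, threshold, and field names are hypothetical illustrations, not values from the paper.

    ```python
    # Illustrative interpretation of S = f(P, R, S, T, C): each dimension is scored
    # in [0, 1] and combined by weights. Weights and threshold are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Context:
        profile: float     # P: match between user profile and candidate scenario
        resource: float    # R: availability/fit of required resources
        spatial: float     # S: spatial relevance (e.g., inside the VR teaching lab)
        temporal: float    # T: temporal relevance (e.g., during a course slot)
        social: float      # C: strength of relevant social connections

    WEIGHTS = dict(profile=0.3, resource=0.25, spatial=0.15, temporal=0.15, social=0.15)

    def scenario_score(ctx: Context) -> float:
        return sum(WEIGHTS[k] * getattr(ctx, k) for k in WEIGHTS)

    ctx = Context(profile=0.9, resource=0.8, spatial=0.6, temporal=0.7, social=0.4)
    score = scenario_score(ctx)
    print("scenario score:", round(score, 3),
          "-> in-depth service" if score > 0.7 else "-> instant service")
    ```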

  • JIANG Jiping
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0739
    Accepted: 2026-02-12

    [Purpose/Significance] With the accelerated convergence of artificial intelligence and the metaverse, smart library information services are undergoing a profound transformation from tool-oriented functional optimization toward holistic cognitive support. Traditional information retrieval and service models increasingly struggle to explain and support complex cognitive activities involving multi-agent collaboration, contextual awareness, and continuous knowledge construction. From the perspective of human-machine-environment collaborative cognition, this study aims to explore the paradigm shift of smart library information services in intelligent digital environments and to establish an integrated theoretical framework that coordinates technological systems, cognitive processes, and contextual factors, thereby providing a systematic theoretical foundation for service model innovation and capability enhancement in smart libraries. [Method/Process] This study first reviews the evolutionary trajectory of information search paradigms - from symbolic computation and semantic understanding to social perception - through systematic literature analysis. We proposed Ecological Search as an emerging paradigm. Drawing on distributed cognition, embodied cognition, and information ecology theories, a human-machine-environment cognitive symbiosis search architecture was constructed, driven by a dual core of social multi-agent communities and contextualized metaverse environments. The architecture operates through an inner-outer dual-loop mechanism consisting of environmental perception and intention emergence, federated retrieval and knowledge fusion, collaborative generation and narrative construction, and cognitive evolution and ecological calibration. Furthermore, an "interaction-knowledge-context" three-dimensional analytical model was developed to decompose key service capabilities and derive differentiated integration pathways under diverse service objectives. [Results/Conclusions] The study proposed three smart library information service models: interaction-enhanced integration, knowledge-reconstructive integration, and context-immersive integration, and clarified how a unified cognitive architecture can be flexibly configured for different user groups and service scenarios. The findings indicate that the ecological search paradigm transcends system-centered instrumental rationality and reconceptualizes information search as a human-machine-environment collaborative process supporting continuous cognitive construction. By integrating multi-agent systems and contextualized environments, this paradigm provides essential mechanisms for smart libraries to move beyond information provision toward advanced cognitive support. The study offers theoretical insights and practical implications for achieving an ecological transformation of smart library information services while balancing technological innovation and human-centered values.

  • LI Jie, ZHANG Xingwang, QIAN Guofu, WEI Zhipeng
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0716
    Accepted: 2026-02-10

    [Purpose/Significance] With a new round of technological innovation and industrial transformation represented by artificial intelligence, the strategic value of data as a key production factor has become increasingly prominent. Data have risen to become an important driving force for reshaping national competition and driving economic growth. Therefore, analyzing the construction plan of the UK's National Data Library (NDL) can provide a useful reference and insight for the development of China's data factor market and the high-quality development of China's data industry. [Method/Process] The UK NDL construction project is an initiative promoted by the Department for Science, Innovation and Technology (DSIT) of the UK government, aimed at building a "Great British Data Library" for the era of artificial intelligence, and establishing a national-level data infrastructure and AI data facility for cross-government, cross-sector, and cross-department data sharing. Based on an investigative analysis of the UK NDL construction plan, this article examines the origins, goals, steps, and challenges of the NDL construction, compares relevant planning documents with China's policies and measures regarding data elements, and further explains the key implications for China from four aspects: top-level design, implementation operations, value sharing, and ecosystem. [Results/Conclusions] The UK's NDL construction plan offers a deeper insight into the development of China's data element market because its focus is shifting from the physical "aggregation of data resources" to the systematic "construction of a data ecosystem". The UK's NDL construction has a strong economic and instrumental character. Its core goal is to leverage public data sharing to gain innovative returns and economic growth for private enterprises. In contrast, China places more emphasis on the empowerment of industry, technology, and society, stressing the role of data elements in driving the transformation and upgrading of various industries, serving broader economic development and the modernization of social governance. In building a national data infrastructure, China should regard the cultivation and construction of a data ecosystem as a systematic social project, establishing a multi-stakeholder data ecosystem involving government, industry, academia, and the public. The high-quality development of the national data industry and the construction of a data element market require us to maintain clarity and determination in top-level design, flexibility and pragmatism in implementation, fairness and innovation in value sharing, and ultimately inclusiveness and trust within the ecosystem. China possesses more abundant data resources, a more complete data environment, stronger social organizational capacity, and more comprehensive digital infrastructure. If it continues to innovate in areas such as a scientifically and reliably structured data element market, refined data governance frameworks, flexible and inclusive data regulatory environments, and healthy and sustainable data ecosystems, China will be able to more safely and efficiently realize the diffusion effects of data element value, forming a uniquely Chinese paradigm in the global competition of data governance.

  • ZHANG Ling
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0683
    Accepted: 2026-02-03

    [Purpose/Significance] This study aims to systematically examine the application of simulation modeling in bibliometrics and to clarify its methodological position within the broader framework of digital humanities tools and agricultural knowledge services. In particular, the paper highlights the innovative potential of integrating simulation modeling with generative artificial intelligence, which enables more flexible representation of heterogeneous behaviors and context-dependent decision-making processes. By bridging bibliometrics, digital humanities tools, and agricultural knowledge services, this research contributes to the theoretical advancement of bibliometric methodology and provides a structured foundation for future applications in agricultural information practice. [Method/Process] This study adopts a systematic literature-based analytical approach to review and synthesize major simulation modeling methods applied in bibliometrics. The analysis covers several representative categories of simulation models, including dynamic modeling of classical bibliometric laws, evolution models of co-authorship and citation networks, multi-agent-based simulation, information and knowledge diffusion models, and evolutionary game-theoretic models. These methods are examined with respect to their modeling objects, underlying assumptions, key parameters, and analytical capabilities. Rather than organizing the review solely by research topics, this study emphasizes simulation modeling logic as the central analytical thread. Each category of simulation method is analyzed in terms of how micro-level rules and interactions generate macro-level bibliometric patterns. Particular attention is paid to the role of digital humanities tools in operationalizing these models, especially through visualization, system integration, and interactive simulation environments that facilitate exploration and interpretation. In addition, this study introduces recent advances in generative artificial intelligence, particularly large language model-based agents, as an extension of traditional multi-agent simulation. By incorporating generative AI into simulation frameworks, it becomes possible to model heterogeneous agents with richer cognitive representations, adaptive behaviors, and contextual reasoning abilities. The methodological discussion draws on theoretical foundations from bibliometrics, complex systems, and computational social science, while also considering practical constraints related to data availability, model calibration, and validation. [Results/Conclusions] The analysis demonstrates that simulation modeling significantly enhances the explanatory power of bibliometric research by revealing dynamic mechanisms behind literature growth, collaboration structures, and knowledge diffusion processes. Compared with traditional static indicators, simulation-based approaches provide deeper insights into how bibliometric patterns emerge and evolve over time. The integration of generative artificial intelligence further expands this capability by enabling more realistic modeling of behavioral heterogeneity and context-sensitive decision-making among research actors. From an application perspective, the study shows that simulation models and associated digital humanities tools can be effectively embedded into agricultural knowledge service workflows. 
These applications include research evaluation, scientific information services, and policy communication, where simulation-based scenario analysis can support strategic planning and decision-making. At the same time, the study identifies several challenges, including data quality constraints, computational costs, and issues related to model interpretability and transparency. The findings suggest that future research should focus on improving data integration, enhancing model validation strategies, and further exploring the integration of generative AI to support more adaptive and explainable simulation systems. By doing so, simulation-based bibliometrics can play a more substantial role in advancing agricultural information services and research management in complex, data-intensive environments.
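
    As one concrete example of the simulation logic discussed above, the sketch below grows a citation network by preferential attachment, a standard generative mechanism behind the heavy-tailed citation distributions described by classical bibliometric laws; all parameters are illustrative and uncalibrated.

    ```python
    # Minimal preferential-attachment citation network: each new paper cites existing
    # papers with probability roughly proportional to their current citation counts.
    import random
    from collections import Counter

    def grow_citation_network(n_papers=5000, refs_per_paper=5, seed=42):
        rng = random.Random(seed)
        citations = []          # flat list of cited ids; sampling from it is
                                # proportional to current in-degree (plus smoothing)
        in_degree = Counter()
        for new_paper in range(1, n_papers):
            pool = citations + list(range(new_paper))   # smoothing: every paper citable once
            for _ in range(min(refs_per_paper, new_paper)):
                cited = rng.choice(pool)
                in_degree[cited] += 1
                citations.append(cited)
        return in_degree

    deg = grow_citation_network()
    print("most-cited papers (id, citations):", deg.most_common(5))
    ```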

  • LIANG Xiaodong, WANG Ru, WANG Shuaijin, XU Dongmei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0655
    Accepted: 2026-02-03

    [Purpose/Significance] Unleashing the consumption potential of rural residents plays a pivotal role in expanding domestic demand and cultivating new economic growth points. The digital economy, with data elements as its core driving force, is gradually becoming a key engine for activating the consumption potential of urban and rural areas in China and promoting consumption upgrading. National Big Data Comprehensive Pilot Zones (NBDCPZs), with their "agglomeration of data elements, cross-domain collaborative empowerment, and precise service matching", continuously meet the personalized and diversified consumption demands of rural residents and have unique value in unleashing their consumption potential. [Method/Process] After conducting a theoretical analysis of the impact of the NBDCPZs on the consumption potential of rural residents, this study formulates corresponding research hypotheses. It uses data from the China Family Panel Studies (CFPS) from 2010 to 2022 and treats the "National Big Data Comprehensive Pilot Zones" policy as a quasi-natural experiment. On the basis of measuring rural residents' consumption potential using the propensity score matching (PSM) method, the difference-in-differences (DID) method is employed to evaluate the impact of NBDCPZ construction on rural residents' consumption potential. [Results/Conclusions] The research findings are as follows: 1) After balancing the endowment characteristics of urban and rural households via the PSM method, the per capita consumption expenditure of rural residents was found to be 2 255.23 yuan less than that of urban residents, indicating that rural areas still have enormous untapped consumption potential. 2) The construction of NBDCPZs significantly promotes the release of rural residents' consumption potential, and this conclusion remains robust after the parallel trend test, placebo test, counterfactual test, addition of fixed effects, and exclusion of the impacts of other policies. 3) An analysis of heterogeneity across household and regional characteristics reveals that the effect of NBDCPZ construction on unlocking rural residents' consumption potential is particularly prominent in eastern China, and is more salient in rural households with a male household head, low income, and middle-aged composition. 4) The mechanism analysis indicates that the "National Big Data Comprehensive Pilot Zones" policy releases the consumption potential of rural residents by increasing their income levels and promoting technological progress in rural areas. Furthermore, household debt exerts a positive moderating effect on the process through which NBDCPZ construction releases rural residents' consumption potential. Based on these conclusions, the following countermeasures and suggestions are put forward: 1) Advance the differentiated layout and integrated application of rural digital infrastructure; 2) Establish a long-term mechanism for enhancing rural residents' digital literacy; 3) Optimize the income growth system for rural residents and consolidate the foundation for consumption upgrading.
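
    A hedged sketch of the two-way fixed-effects DID specification implied by the method description is shown below, using statsmodels; the variable names (consumption, treated, post, province, year) are placeholders for the PSM-matched CFPS panel, not the authors' code.

    ```python
    # Hedged sketch of a two-way fixed-effects DID regression with clustered errors.
    # Column names are assumptions: treated = household in a pilot-zone province,
    # post = year after zone establishment; province/year dummies absorb main effects.
    import statsmodels.formula.api as smf

    def estimate_did(df):
        model = smf.ols("consumption ~ treated:post + C(province) + C(year)", data=df)
        return model.fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

    # result = estimate_did(panel_df)   # panel_df: matched sample after PSM
    # print(result.params["treated:post"], result.pvalues["treated:post"])
    ```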

  • LYU Lucheng, ZHOU Jian, SUN Wenjun, ZHAO Yajuan, HAN Tao
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0672
    Accepted: 2026-01-23

    [Purpose/Significance] The use of large language models (LLMs) for patent text mining has become a major research topic in recent years. However, existing studies mainly focus on applying LLMs to specific tasks, and there is a lack of systematic evaluation of fine-tuned LLMs across multiple scenarios. To address this problem, this study takes ChatGLM, an open-source LLM that supports local fine-tuning, as an example. We conduct a comparative evaluation of three types of patent text mining tasks - technical term extraction, patent text generation, and automatic patent classification - under a unified experimental framework. The performance of the fine-tuned models is compared from six aspects: different training data sizes, different numbers of training epochs, different prompts, different prefix lengths, different datasets, and single-task versus multi-task fine-tuning. [Method/Process] This study was based on an open-source LLM and carried out fine-tuning experiments for specific patent tasks in order to clarify the impact of different fine-tuning strategies on the performance of LLMs in these tasks. Considering task adaptability, model size, inference efficiency, and resource consumption, ChatGLM-6B-int4 was selected as the base model, and P-Tuning V2 was adopted as the fine-tuning method. Three categories of patent tasks are included: extraction, generation, and classification. The extraction task is patent keyword extraction. The generation tasks include: 1) innovation point generation; 2) abstract generation based on a given title; 3) rewriting an existing title; 4) rewriting an existing abstract; 5) generating novelty points based on an existing abstract; 6) generating patent advantages based on an existing abstract; and 7) generating patent application scenarios based on an existing abstract. Six experimental comparison dimensions are designed: 1) different training data sizes; 2) different numbers of training epochs; 3) different datasets with the same data size; 4) different prompts under the same task and data; 5) different P-Tuning V2 prefix lengths with the same training data; and 6) single-task fine-tuning versus multi-task fine-tuning. Two types of evaluation metrics were used: for the extraction and generation tasks, the BLEU metric based on n-gram string matching was adopted; for the classification task, accuracy, recall, and F1 score were used. [Results/Conclusions] Based on the fine-tuning results, several conclusions were obtained. First, a larger training data size does not always lead to better performance. Second, the appropriate number of training epochs depends on the data size. Third, under the same data distribution, different data subsets have limited influence on performance. Fourth, under the same task and dataset, different prompts have little impact on model performance. Fifth, the optimal prefix length is closely related to the training data size. Sixth, for a specific task, single-task fine-tuning performs better than multi-task fine-tuning. These conclusions provide reference and guidance for fine-tuning LLMs in practical patent information work.
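
    The two evaluation routes described above can be illustrated as follows; the example strings and IPC labels are placeholders, and the BLEU call here is sentence-level with smoothing (the paper's exact BLEU configuration is not specified in the abstract).

    ```python
    # Hedged sketch: BLEU for generation/extraction outputs, accuracy/recall/F1 for
    # classification. Example texts and labels are illustrative placeholders.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    from sklearn.metrics import accuracy_score, recall_score, f1_score

    reference = "a method for extracting key technical terms from patent abstracts".split()
    candidate = "a method for extracting technical terms from patent abstracts".split()
    bleu = sentence_bleu([reference], candidate,
                         smoothing_function=SmoothingFunction().method1)
    print(f"BLEU = {bleu:.3f}")

    y_true = ["H01L", "G06F", "G06F", "A61K"]   # gold IPC classes (illustrative)
    y_pred = ["H01L", "G06F", "A61K", "A61K"]
    acc = accuracy_score(y_true, y_pred)
    rec = recall_score(y_true, y_pred, average="macro", zero_division=0)
    f1  = f1_score(y_true, y_pred, average="macro", zero_division=0)
    print(f"accuracy = {acc:.2f}, recall = {rec:.2f}, F1 = {f1:.2f}")
    ```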

  • GUO Hailing, ZENG Meiyun, FENG Yuxi
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0568
    Accepted: 2026-01-22

    [Purpose/Significance] Against the backdrop of national innovation-driven development strategies and the pressing need to enhance the efficiency with which scientific and technological achievements are transformed within universities, university libraries are undergoing a critical transition. They are shifting from being traditional, passive information providers to becoming proactive, embedded partners in the research and innovation value chain. However, this transition is often hampered by inherent limitations in traditional service models. This study, therefore, posits artificial intelligence (AI) as a pivotal enabler and investigates the specific mechanisms through which AI technologies can empower university libraries to achieve deep, systemic integration into the entire lifecycle of technology transfer. The research aims to provide a comprehensive theoretical framework for understanding this transformation and offer actionable, evidence-based practical pathways for academic libraries to redefine their functional boundaries and substantially strengthen the institutional support ecosystem for university technology transfer. [Method/Process] This research employs a qualitative multi-case study design, underpinned by an analytical framework constructed around the four critical, sequential stages of the technology transfer lifecycle: 1) research topic selection and project initiation, 2) research and development, 3) project conclusion and evaluation, and 4) marketization and industrialization of outcomes. Case selection followed purposive sampling criteria to ensure representation across diverse contexts, including domestic and international universities, as well as varied library types. The primary data comprised detailed case descriptions from published academic literature, institutional reports, and official service platforms. Within this staged framework, the analysis focuses on two intertwined dimensions at each phase: the evolution of the library's core service functions and the transformative impact of AI empowerment. Through a comparative cross-case analysis, this study examines how specific AI technologies augment traditional services, fundamentally changing the role and value proposition of libraries. [Results/Conclusions] The results show that through intelligent information analysis, knowledge association, data mining, and precise matching, AI can promote university libraries to shift from resource supply-oriented support to collaborative services that run through the entire lifecycle of technology transfer. This transformation manifests across the four-stage lifecycle as a shift: from providing literature to forecasting opportunities at the initiation phase; from offering patent data to navigating R&D pathways and risks during development; from archiving outputs to assessing value and potential at conclusion; and from disseminating information to intelligently brokering industry partnerships at the commercialization phase. Synthesizing these stage-specific transformations, this study constructs a novel, integrated service framework. This framework explicitly links specific AI capabilities with the redefined core functions of the library at each stage, illustrating the transition from a linear support model to a dynamic, AI-augmented ecosystem wherein the library serves as a central intelligence node. 
Meanwhile, this study reveals practical challenges in current practices, including ambiguous organizational boundaries, insufficient professional capabilities, and imperfect evaluation mechanisms oriented toward technology transfer. Correspondingly, it proposes strategies such as clarifying collaborative positioning, strengthening the construction of AI-empowered service capabilities, and improving technology transfer-oriented evaluation mechanisms to promote the sustainable development of AI-empowered research services in university libraries.

  • SONG Lingling, ZHANG Xinghui
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0524
    Accepted: 2026-01-21

    [Purpose/Significance] This study investigates the operational practices and strategic development pathways of intelligent consultation services in libraries globally, specifically under the impetus of artificial intelligence (AI) large language models (LLMs). By conducting a systematic analysis of representative case studies, we examine the applied technologies, emerging service models, and measurable efficacy of these AI-enhanced services. The research holds significance in offering actionable insights for the effective implementation of AI within the library sector. It aims to guide the evolution of intelligent consultation toward greater innovation and cultural-contextual adaptability, thereby providing both theoretical underpinning and practical guidance for the localized development of smart library ecosystems. [Method/Process] Employing a comparative case study methodology, this research selected 30 representative libraries from diverse international and domestic contexts as its subjects. Data were primarily gathered through structured online surveys and content analysis of publicly available service interfaces, systematically capturing the scope, functionality, and operational status of their intelligent consultation services. The analysis focused on characterizing technological applications-identifying core LLM integrations, typical functionalities, and architectural highlights. It further integrated findings to compare and contrast prevailing service models and implementation variances. Subsequently, the study conducted a multidimensional comparative assessment of the practical service effectiveness enabled by AI large models, evaluating performance across four key areas: service response efficiency and accuracy; capabilities in resource organization and structured knowledge management; tangible improvements in user service experience; and degree of service model innovation. [Results/Conclusions] The findings indicate that AI large model-driven intelligent consulting services exhibit pronounced advantages in key operational metrics, including enhanced response efficiency, superior knowledge synthesis and management capabilities, enriched user interaction experiences, and the facilitation of novel service paradigms. However, a comparative analysis reveals significant disparities among libraries concerning the depth of technological integration, the sophistication of service offerings, and the level of cultural and linguistic adaptation achieved. In response, the study proposes targeted strategic recommendations from three interrelated perspectives: nuanced technological application, user-centered service design, and collaborative ecosystem construction. It advocates for libraries to prioritize the synergistic balance between technological capability and humanistic service values, to achieve deeper integration with localized and institutional knowledge repositories, and to institute mechanisms for continuous service evaluation and iterative optimization. These approaches are essential for fostering more efficient, inclusive, and sustainable development of intelligent consultation services. Future research directions should encompass longitudinal studies on service effectiveness, the integration of multimodal interactive capabilities, and the formulation of ethical guidelines and governance frameworks for AI deployment in library services.

  • GUO Jinbo
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0593
    Accepted: 2026-01-20

    [Purpose/Significance] With the rapid integration of generative artificial intelligence into library services, user trust in information has begun to exhibit a new pattern characterized by high usage, low certainty, and increased reliance on institutional guarantees. Existing studies on online credibility, artificial intelligence generated content (AIGC) applications, and library innovation have mostly examined technical performance, information literacy, or governance issues in isolation. Few have systematically analyzed how specific AIGC features, user capabilities, and the institutional environment of libraries jointly shape multi-dimensional user trust. This study focuses on AIGC-supported services in public and academic libraries and constructs a comprehensive analytical framework linking technological signals, user ability, and library-based institutional mediation to the formation of cognitive, emotional, and behavioral trust. The paper contributes to the refinement of trust theory in digital information environments by providing empirical evidence from a large-scale sample in China. It also offers actionable insights for libraries seeking to deploy AIGC while maintaining or enhancing their role as trusted public knowledge institutions. [Method/Process] The study is grounded in classic research on cognitive authority and online credibility, combined with recent work on AIGC, knowledge services, information literacy, and library governance. It conceptualizes user trust as a three-dimensional construct comprising cognitive, emotional, and behavioral components. AIGC-related technological features are operationalized along three axes: perceived content quality, generation transparency, and interactivity. User capability is measured through standardized digital literacy tests and indicators of professional background, while the library environment is captured by the presence of institutional arrangements such as usage guidelines, staff verification, result labelling, and risk reminders. Data were collected through a large-scale questionnaire survey in ten public and academic libraries in Henan Province, yielding 2 347 valid responses. After data cleaning and reliability and validity checks, the study employed a combination of structural equation modelling, two-stage least squares estimation, threshold regression, spatial autoregressive models, dynamic panel system GMM estimation, quantile regression, and finite mixture models. This sequential strategy allowed for simultaneous identification of structural paths, endogenous relationships, non-linear and moderating effects, spatial spillovers and temporal dependence, as well as heterogeneous trust formation patterns across user groups. [Results/Conclusions] The findings confirm that user trust in AIGC-enabled library services is best understood as a three-dimensional structure, in which cognitive trust influences emotional trust, and both jointly shape behavioral trust. Content quality and generation transparency exert strong and robust positive effects on cognitive trust, while interactivity mainly enhances emotional trust and indirectly affects behavioral intentions. Digital literacy and professional background introduce clear threshold and amplification effects: when user capability is below certain levels, improvements in content quality and transparency have limited impact on trust, but above these thresholds the marginal effects increase markedly.
Library-level institutional arrangements, including human review, explicit labelling, and standardized usage rules, not only raise overall trust levels but also significantly strengthen the effects of technological signals, sometimes to a degree comparable with individual-level capability factors. Spatial and dynamic analyses show that trust exhibits both spillover and path dependence: practices in one library can influence neighbouring institutions through user mobility and word of mouth, and positive or negative experiences accumulate into longer-term evaluations. The study suggests that libraries should treat trust building as a core design objective when introducing AIGC, embed transparency and quality signals into interfaces and metadata, establish robust verification and correction workflows, and provide differentiated services for users with different literacy levels and professional backgrounds. The limitations include the concentration of data in one province and the use of primarily macro-level instruments for identifying causation. Future research could extend the framework to cross-regional and cross-type libraries, compare specific functional scenarios such as reference services and reading promotion, and further integrate trust analysis with broader issues of library governance, literacy education, and responsibility allocation in AIGC ecosystems.
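
    As an illustration of one model in the battery listed above, the sketch below fits quantile regressions of trust on the technological signals with statsmodels; the column names (trust, quality, transparency, interactivity, literacy) stand in for the survey measures and are not the authors' actual variable coding.

    ```python
    # Hedged sketch: quantile regressions of a trust score on technological signals,
    # one of several estimators mentioned in the abstract. Column names are assumed.
    import statsmodels.formula.api as smf

    def trust_quantile_fits(df, quantiles=(0.25, 0.5, 0.75)):
        results = {}
        for q in quantiles:
            mod = smf.quantreg("trust ~ quality + transparency + interactivity + literacy", df)
            results[q] = mod.fit(q=q)
        return results

    # fits = trust_quantile_fits(survey_df)
    # for q, res in fits.items():
    #     print(q, round(res.params["quality"], 3))
    ```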

  • WANG Jian
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0708
    Accepted: 2026-01-20

    [Purpose/Significance] The effective flow of agricultural knowledge from innovation sources to fields is a core component of agricultural modernization. However, a persistent "structural knowledge gap" exists between macro-level knowledge supply and the micro-level needs of farmers, which traditional top-down extension systems often fail to bridge due to issues such as information decay, a lack of feedback, and poor contextual adaptation. In the context of promoting the high-quality development of rural public cultural services, grassroots reading spaces (e.g., rural libraries and village reading rooms) face a critical imperative to evolve beyond their traditional role as static repositories of books. This study reimagines grassroots reading spaces as dynamic "knowledge nodes" within rural socio-information ecosystems. The primary significance of this research lies in its innovative integration of public governance and knowledge management theories to construct a novel "node-interface-flow" analytical framework. It moves the discourse forward from predominant concerns with resource allocation or technology access to a deeper investigation of how internal governance mechanisms fundamentally shape these spaces' capacity to process and diffuse knowledge. By doing so, it positions the study at the intersection of rural studies, public administration, and knowledge science, offering a refined theoretical lens to understand and design rural knowledge infrastructure. Its practical importance stems from providing evidence-based, mechanistic explanations and actionable pathways for transforming these ubiquitous facilities from venues of "cultural provision" into active agents of "knowledge empowerment" for rural communities. [Method/Process] To uncover the mechanisms through which collaborative governance influences knowledge flow, this study employed a sequential explanatory mixed-methods design (QUAN → QUAL). The research was empirically grounded in a comparative case study of three rural reading spaces in China, deliberately selected through theoretical sampling to represent three distinct ideal-typical governance models: Jiangyin (exemplifying a deep contractual model involving long-term institutional agreements between local government and a vocational college), Liancheng (representing an administrative-dominant model operating within a standardized county-branch library system), and Yuhang (illustrating a social collaborative model based on government-purchased services from local social organizations). The methodological appropriateness of this multi-case comparative approach lies in its capacity to maximize variation in the key independent variable (governance model) while controlling for contextual factors, thereby allowing for clearer causal inference regarding the model's impact. Data were collected from March to August of 2024. The quantitative phase involved a structured questionnaire survey administered to 438 farmers across the villages served by the three case spaces (from 480 distributed, 91.3% valid response rate). The survey instrument was designed to measure key variables derived from the theoretical framework, including perceived interface quality (e.g., resource relevance, expert accessibility), knowledge acquisition, community knowledge sharing, and technology adoption intention. Reliability and validity tests (e.g., Cronbach's α, K-R20) confirmed the robustness of the measures. 
The subsequent qualitative phase comprised 38 in-depth, semi-structured interviews with space managers, active farmers, and key partners, supplemented by participatory observation and archival analysis. This phase aimed to provide rich, contextual insights into the operational mechanisms linking governance rules, interface functioning, and knowledge flow patterns. Quantitative data were analyzed using SPSS for ANOVA and regression analysis to test performance differences and mediation effects, while qualitative data were thematically coded using NVivo to elucidate underlying processes. [Results/Conclusions] The findings confirm the proposed "governance model → interface characteristics → flow efficacy" mechanism. The deep contractual model, through its "embedded interface," successfully couples strong formal institutional guarantees (e.g., mandated expert deployment, resource co-selection) with derived informal trust relationships from long-term embeddedness. This combination significantly drives the deep, closed-loop flow of highly complex, codified knowledge, completing cycles from external input to local application and feedback. In contrast, the social collaborative model's "networked interface," characterized by vibrant informal community networks activated by skilled social organizers, proves far more effective in stimulating the horizontal sharing, exchange, and co-creation of tacit knowledge within the community. The administrative-dominant model, with its standardized formal interface and underdeveloped informal connections, demonstrates limited efficacy, often resulting in interrupted, one-way knowledge flow. Based on these insights, the study proposes a two-dimensional model of "institutional depth" versus "networked breadth" to describe the unique effectiveness of different governance models. Based on these empirical results, three concrete policy and management recommendations have been proposed to foster responsive rural knowledge nodes: 1) shifting performance evaluation and resource allocation from static input metrics towards a focus on dynamic "interface capability"; 2) designing and institutionalizing specialized "knowledge broker" programs to staff these interfaces with trusted, skilled intermediaries; and 3) initiating collaborative "local knowledge repository" projects to systematically capture, digitize, and valorize indigenous community wisdom. The study acknowledges limitations regarding the generalizability of findings from a three-case comparison and suggests future research directions, including longitudinal studies to observe interface evolution, social network analysis to precisely map relational structures, and exploration of how digital "smart interfaces" might integrate with the social interfaces examined here to create new paradigms for rural knowledge service.
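
    A minimal sketch of the Baron-Kenny style regression steps behind the reported "governance model → interface characteristics → flow efficacy" mediation test is given below, written in statsmodels rather than SPSS; the column names are placeholders for the survey variables.

    ```python
    # Hedged sketch of a Baron-Kenny style mediation check. Assumed columns:
    # knowledge_gain (outcome), interface_quality (mediator), governance (case model).
    import statsmodels.formula.api as smf

    def mediation_check(df):
        total    = smf.ols("knowledge_gain ~ C(governance)", df).fit()                      # total effect
        a_path   = smf.ols("interface_quality ~ C(governance)", df).fit()                   # IV -> mediator
        b_direct = smf.ols("knowledge_gain ~ interface_quality + C(governance)", df).fit()  # mediator + IV
        return total, a_path, b_direct

    # total, a_path, b_direct = mediation_check(farmer_df)
    # print(b_direct.params["interface_quality"], b_direct.pvalues["interface_quality"])
    ```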

  • GUO Yanli, GAO Rui, ZOU Meifeng, LIU Zidan
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0663
    Accepted: 2026-01-20

    [Purpose/Significance] As the user base grows, the number of online comments is increasing rapidly. This massive volume of comments has broadened enterprises' innovative thinking and provided more diverse innovation options, but it has also brought about the problem of information overload. Faced with massive amounts of online user comments, how to efficiently and precisely mine information of practical value, integrate that information to identify product innovation opportunities, and transform it into high-quality resources for enterprise product innovation has therefore become a topic of great concern in both academic and industrial circles. Against this backdrop, studying how to identify product innovation opportunities based on online reviews is of great theoretical significance and practical value. Unlike previous studies, this paper uses the BERT model to accurately screen negative user comments and identify key demand points. It also combines the characteristics of ordinary users and leading users, integrates dual-source comment data from e-commerce platforms and online communities, and associates the demand issues of ordinary users with the suggestions of leading users, thereby identifying product innovation opportunities more accurately. [Method/Process] First, we collected and pre-processed ordinary user comment data and leading user comment data. Second, the BERT model and the LDA topic model were used to categorize the sentiment of the comment data and cluster it, so as to mine the problems of ordinary users and the suggestions of leading users. Finally, based on semantic similarity analysis, problem-suggestion topic mapping was realized to identify product innovation opportunities with high innovation value. [Results/Conclusions] This paper constructed a problem-suggestion product innovation opportunity identification method driven by dual-source data, and selected the action camera as a case to elaborate in detail on the specific practice of the proposed method in the field of product innovation. The case analysis verified the feasibility of the proposed method, providing an operational reference for enterprises on how to carry out product innovation work efficiently. However, this paper still has certain limitations and needs to be improved with more abundant data in subsequent studies. First, the data collected in this article mainly come from e-commerce platforms and online community platforms. Although these data contain a large amount of user information, there are still deficiencies. In the future, we will introduce more data sources, such as news media and technology websites, to obtain more comprehensive and diverse data. Second, this paper has only conducted case application research in the field of intelligent digital products. In the future, we need to further explore more fields, such as smart wearables and whole-house intelligence, to enhance the universality of the product innovation opportunity identification framework constructed in this paper.
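
    The topic-clustering and problem-suggestion mapping steps can be sketched as below. The BERT sentiment step is assumed to have already selected negative ordinary-user comments and leading-user suggestions, and sklearn's LDA plus TF-IDF cosine similarity stand in for the paper's exact models; the tiny corpora are illustrative only.

    ```python
    # Hedged sketch: LDA topic keywords for problem and suggestion corpora, then a
    # cosine-similarity mapping between problem topics and suggestion topics.
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.metrics.pairwise import cosine_similarity

    def topic_keywords(docs, n_topics=2, n_words=5):
        vec = CountVectorizer()
        X = vec.fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
        vocab = vec.get_feature_names_out()
        return [" ".join(vocab[i] for i in comp.argsort()[-n_words:]) for comp in lda.components_]

    problems    = ["battery drains fast during 4k recording", "image stabilization fails at night"]
    suggestions = ["add a removable high capacity battery", "improve low light stabilization algorithm"]

    prob_topics, sugg_topics = topic_keywords(problems), topic_keywords(suggestions)
    tfidf = TfidfVectorizer().fit(prob_topics + sugg_topics)
    sim = cosine_similarity(tfidf.transform(prob_topics), tfidf.transform(sugg_topics))
    print(sim.round(2))   # rows: problem topics, cols: suggestion topics
    ```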

  • YANG Min
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0581
    Accepted: 2026-01-08

    [Purpose/Significance] Seoul Outdoor Library has not only gained recognition from Seoul citizens, but has also received awards from the International Federation of Library Associations and Institutions (IFLA) for two consecutive years. Since its opening, it has served 8 million users, with a user satisfaction rate of 96.6%, and it has attracted the attention of the library community both domestically and internationally. Based on this, this paper extracts replicable and scalable practical experiences and insights from the successful case of Seoul Outdoor Library. Its research significance lies in addressing the dilemma of "practice taking precedence over theory" in outdoor libraries, filling the academic research gap in this field, and providing practical guidance for the long-term, high-quality development of outdoor libraries in China. [Method/Process] Conclusions drawn from single-case studies often offer greater insight and closer relevance to practice. Based on this, the paper analyzes the basic situation of Seoul Outdoor Library through the single-case study method. Moreover, the paper adopts the "triangulation verification" multi-source data collection method to enhance the validity and reliability of the research. We found that the main service contents include book reading services, space services, art literacy education, tourism information services, and policy display and promotion services. In addition, Seoul Outdoor Library exhibits green integration and sustainability in its design, flexibility and decentralization in its spatial characteristics, openness and flexibility in its scene characteristics, and an emphasis on interaction and human-centered service. The innovative value of Seoul Outdoor Library is reflected in the coexistence of low-cost space supply and high user satisfaction, the deepened connection between libraries and public affairs, and the organic integration of social and economic benefits. [Results/Conclusions] The paper holds that the development of outdoor libraries in China should proceed from several aspects. Firstly, outdoor libraries should be grounded in observation to advance the "rediscovery of libraries", rediscovering the new value of space, the new role of librarians, and the new connotation of resources. Secondly, outdoor libraries should be endowed with values and infused with soul, making full use of local resources to endow them with spiritual cores. Thirdly, outdoor libraries should shape their output and optimize scene construction. Finally, outdoor libraries should nourish the heart through implementation, deeply cultivate emotional experiences, and allow users to feel a sense of belonging through humanistic details. Of course, the paper inevitably has limitations. Future research will expand case samples to gain a more comprehensive understanding of outdoor libraries and facilitate their high-quality development in China.

  • HAO Yali, LIANG Ying, DING Ruoxi
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0651
    Accepted: 2026-01-06

    [Purpose/Significance] With the continuous advancement of national governance modernization and the rapid development of artificial intelligence (AI) technologies, emotional-functional embodied intelligence has become integral to grassroots social governance. This development not only reshapes traditional governance tools but also triggers profound reflections on the balance between instrumental rationality and value rationality. In this context, systematically examining the internal mechanisms and potential risks associated with the integration of emotional-functional embodied intelligence into social governance can provide both theoretical enrichment and practical guidance for technology-enabled governance modernization. [Method/Process] Based on Max Weber's "tool-value" dichotomy, this study focuses on key issues concerning the influence mechanisms, risk boundaries, and regulatory pathways of emotional-functional embodied intelligence in social governance. By situating the analysis within concrete scenarios of its embedding in social governance practices, the research combines theoretical reflection with contextual examination to explore how emotional-functional embodied intelligence reshapes governance structures and processes. [Results/Conclusions] The findings reveal that AI, embodied intelligence, and emotional-functional embodied intelligence differ significantly in terms of technological architecture, functional form, and modes of integration into social governance. While AI optimizes decision-making through data empowerment and embodied intelligence delivers services through physical interaction, emotional-functional embodied intelligence achieves full-process and in-depth integration into social governance by relying on affective linkage. It forms an integrated structural system composed of the demand, intelligence, action, and support layers, thereby enabling coordinated governance operations that combine rational decision-making with emotional interaction. Through three core mechanisms - intelligence embedding, human-machine coupling, and feedback-driven iteration - emotional-functional embodied intelligence is able to simultaneously accomplish rational decision-making tasks and emotional interaction objectives. However, the embedding of emotional-functional embodied intelligence in social governance also implies a dual structure of risks. On the one hand, it may amplify traditional risks inherent in AI technologies, such as algorithmic dependence and blurred responsibility attribution. On the other hand, it may generate new forms of context-specific risks, including emotional-cognitive alienation, value-guidance deviation, and the reconstruction of governance authority. To address these challenges, it is necessary to construct a full-chain regulatory framework for accountability and establish full-process technological safeguards encompassing ex-ante prevention, in-process monitoring, and ex-post traceability. Concurrently, it is essential to articulate value-oriented principles for emotion-informed governance and clarify a human-machine collaborative governance framework in which human actors retain primary authority while intelligent technologies play an auxiliary role. Through these coordinated measures, effective risk regulation and rational balance can be achieved in the application of emotional-functional embodied intelligence in social governance.

  • PAN Yong, SUN Jing, WANG Jiandong
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0664
    Accepted: 2026-01-06

    [Purpose/Significance] As data become a strategic resource in the digital economy, their quality directly affects the efficiency of value creation and the effectiveness of public governance. However, with the continuous expansion of data scale and the deepening of application scenarios, pervasive quality issues - such as inconsistencies, errors, and redundancies - have emerged as a significant bottleneck restricting the release of data element potential. High-quality public data are particularly critical for empowering government decision-making and optimizing public services. Addressing the urgent practical need for high-quality data supply, this paper relies on the public basic databases (specifically the Population Database and Legal Entity Database) of a representative city to construct a scientific, systematic, and operable data quality assessment system. The study aims to diagnose existing quality defects in these foundational assets and provide theoretical support and actionable references for relevant departments to transition from passive data management to active quality governance. [Method/Process] To ensure the assessment is both scientifically rigorous and practically applicable, this study establishes a comprehensive evaluation framework based on domestic and international research, combined with the national standard GB/T 36344-2018 and local data characteristics. The framework comprises a hierarchical structure with 6 primary indicators (Normativity, Integrity, Consistency, Accuracy, Timeliness, and Accessibility), 17 secondary indicators, and 61 specific detection items. The study employs a dual-track assessment methodology integrating automated detection tools with manual verification. Automated SQL scripts and rule engines are utilized for the large-scale quantitative detection of intrinsic dimensions, while manual checks and interviews address contextual dimensions. This methodology was applied to conduct a multi-dimensional evaluation of 1 367 data items across 102 datasets in the city, ensuring a thorough analysis of the data status. [Results/Conclusions] The evaluation results indicate that while the overall construction of the city's public basic databases is positive, multidimensional quality issues persist. Specifically, the assessment revealed problems such as data coding errors, non-standardized classification, missing data items, missing or duplicate primary keys, inconsistent formats, the presence of illegal characters or outliers, and data delays or discontinuations. To address these challenges, the paper proposes four systematic improvement strategies: 1) To unify data standards and coding systems to ensure consistency across departments; 2) To construct a full-process quality control mechanism covering data collection, storage, and usage; 3) To strengthen technical platform support by implementing real-time monitoring and intelligent warning capabilities; and 4) To improve organizational synergy and institutional guarantees to solidify the management foundation. These measures are intended to optimize data supply quality and support the high-quality and sustainable development of the data element market.
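    To make the "automated SQL scripts and rule engines" step concrete, the sketch below runs a few representative rule checks against a toy SQLite table. The table and column names are hypothetical, and each rule stands for one defect class reported above (missing items, duplicate primary keys, stale records, malformed codes); this is not the assessment's actual toolchain.
    ```python
    # Rule-based data quality checks on a toy table (SQLite stand-in).
    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE population (person_id TEXT, id_card TEXT, updated_at TEXT);
    INSERT INTO population VALUES
      ('P001', '110101199001011234', '2025-06-01'),
      ('P001', '110101199001011234', '2025-06-01'),   -- duplicate primary key
      ('P002', NULL,                 '2019-01-01'),   -- missing item, stale record
      ('P003', '12AB',               '2025-06-01');   -- malformed code
    """)

    def run_rule(name, sql):
        print(f"{name}: {conn.execute(sql).fetchone()[0]} violating record(s)")

    run_rule("Integrity / missing id_card",
             "SELECT COUNT(*) FROM population WHERE id_card IS NULL")
    run_rule("Consistency / duplicate primary key",
             "SELECT COALESCE(SUM(c - 1), 0) FROM "
             "(SELECT COUNT(*) AS c FROM population GROUP BY person_id HAVING COUNT(*) > 1)")
    run_rule("Timeliness / not updated since 2020",
             "SELECT COUNT(*) FROM population WHERE updated_at < '2020-01-01'")

    # A normativity rule expressed outside SQL: 18-character resident ID format
    pattern = re.compile(r"^\d{17}[\dX]$")
    bad = [v for (v,) in conn.execute("SELECT id_card FROM population WHERE id_card IS NOT NULL")
           if not pattern.match(v)]
    print("Normativity / malformed id_card:", len(bad), "violating record(s)")
    ```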

  • YUAN Shuo
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0526
    Accepted: 2025-12-31

    [Purpose/Significance] The accelerated digital transformation of public cultural services has fundamentally reshaped modes of service delivery, governance frameworks, and citizen engagement. Exploring how digital technologies empower the high-quality development of public cultural services is essential for designing a modern, equitable, and efficient service system. This study contributes to the existing literature by systematically investigating not only the direct effects of digital technologies but also threshold effects, regional heterogeneity, spatial spillovers, and mediating mechanisms, thereby clarifying how digital innovation interacts with governance capacity and institutional environments. Unlike previous research, which relied mainly on descriptive or single-method analyses, this study employs an integrated empirical framework that captures the dynamic and multidimensional nature of digital empowerment in public services, enriching the theoretical and practical understanding of digital governance. [Method/Process] This study employs panel data from 31 Chinese provinces over the period 2015-2023 to systematically investigate how digital technologies influence the high-quality development of public cultural services. A combination of fixed-effects models, mediating-effects models, threshold regression models, and spatial econometric models was used to capture direct, nonlinear, regional, spatial, and mediating effects. To control for potential confounding factors, fiscal expenditure, population density, and cultural literacy were incorporated as covariates. The analysis drew on theoretical foundations conceptualizing digital technology as a new productive force and was supported by empirical data from national statistical yearbooks, digital finance indices, and governance performance indicators, ensuring both methodological rigor and contextual relevance. [Results/Conclusions] Digital technology significantly promotes the high-quality development of public cultural services, with measurable positive effects for each incremental increase in the digital technology development index. The influence exhibits a nonlinear threshold pattern, reflecting a "promotion-weakening-enhancement" trajectory, highlighting the necessity of integrating technological applications with governance structures, resource allocation, service design, and public digital literacy. Regional analyses reveal stronger effects in the central and western provinces, suggesting that digital technologies can help mitigate service disparities under supportive policy frameworks. The spatial econometric results indicate positive spillover effects on neighboring regions, while the mediation analysis identifies government governance capacity as a key mechanism through which technological inputs translate into service outcomes. Policy implications include reinforcing digital infrastructure, enhancing institutional support, implementing region-specific strategies, fostering inter-provincial coordination, and strengthening government-led service integration. The study has limitations, including the possibility of unobserved concurrent causal pathways. Future research should adopt configurational methods, such as qualitative comparative analysis, to further elucidate the complex, multicausal dynamics of digital technology empowerment in public cultural services.
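    As an illustration of the panel specification implied above, the sketch below estimates a two-way (province and year) fixed-effects regression with province-clustered standard errors on synthetic province-year data. Variable names such as digital_index and service_quality are placeholders, not the study's indicators.
    ```python
    # Two-way fixed-effects sketch on a synthetic 31-province, 2015-2023 panel.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    provinces = [f"p{i:02d}" for i in range(31)]
    years = list(range(2015, 2024))
    df = pd.DataFrame([(p, y) for p in provinces for y in years], columns=["province", "year"])
    df["digital_index"] = rng.normal(size=len(df))
    df["fiscal_exp"] = rng.normal(size=len(df))
    df["pop_density"] = rng.normal(size=len(df))
    df["service_quality"] = (0.4 * df["digital_index"] + 0.2 * df["fiscal_exp"]
                             + rng.normal(size=len(df)))

    # Province and year dummies give the two-way fixed effects; errors clustered by province
    model = smf.ols("service_quality ~ digital_index + fiscal_exp + pop_density "
                    "+ C(province) + C(year)", data=df)
    result = model.fit(cov_type="cluster",
                       cov_kwds={"groups": df["province"].astype("category").cat.codes})
    print("digital_index coefficient:", result.params["digital_index"],
          "clustered s.e.:", result.bse["digital_index"])
    ```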

  • WU Yuhao, LIU Yihao, LI Qingjun, HU Xu
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0436
    Accepted: 2025-12-31

    [Purpose/Significance] Against the backdrop of the digital economy, problems such as the difficulty of integrating multi-source heterogeneous data, the low efficiency of matching supply and demand, and the imbalance between security and openness have prevented traditional technologies and service models from breaking through the bottlenecks of library data opening and sharing. Large language models (LLMs) offer a new path out of this predicament. This study aims to improve the theoretical system of technology-empowered open sharing of library data. It also aims to fill the gap in existing research, which mostly focuses on general technologies and lacks systematic adaptation to library scenarios. Additionally, this study aims to provide theoretical and practical support for libraries to transform from data custodians to knowledge enablers, thereby supporting the high-quality development of the industry. [Method/Process] Based on an elaboration of the practical impact of LLMs on the open sharing of library data, this paper analyzed the connotation, essence, and characteristics of LLM-empowered library data open sharing. On this basis, the internal logic of LLMs driving the open sharing of library data was discussed, and the implementation path was explored. [Results/Conclusions] The open sharing of library data based on LLMs is manifested as a hierarchical leap in the value of data elements from basic integration and demand matching to decision support. This process needs to be efficiently advanced through human-machine collaboration on the supply side, user participation on the demand side, and cross-domain linkage on the ecosystem side, and it should run through the entire life cycle of data production, governance, circulation, and application. Based on this, four guarantee strategies were proposed. In terms of technical architecture, the "general model + domain fine-tuning" mode should be adopted to adapt to the characteristics of library data. In terms of data governance, a full-process quality control and hierarchical desensitization mechanism should be established. In terms of talent cultivation, a "business + discipline + technology" compound team should be built. In terms of ethical construction, a full-process review and user rights protection system should be established. In the future, it is possible to further explore the in-depth adaptation of LLMs to libraries' special collection resources, as well as the construction of a dynamic and elastic security governance framework, to promote the ecological development of industry data openness and sharing.

  • YI Chenhe, ZHANG Yuting
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0495
    Accepted: 2025-12-31

    [Purpose/Significance] Generative Artificial Intelligence (GAI) has rapidly reshaped the landscape of social information dissemination, bringing unprecedented network public opinion risks - such as large-scale disinformation spread, algorithmic bias-induced social inequality, extreme emotional polarization, and model hallucinations leading to cognitive deviations - that significantly amplify the complexity, suddenness, and cross-domain spillover effects of public opinion evolution. These risks not only undermine the authenticity and order of information ecosystems but also pose severe challenges to social governance, public trust, and policy-making efficiency, making accurate identification, quantitative assessment, and early warning an urgent academic and practical task. Existing research has obvious limitations: single-dimensional assessment frameworks fail to capture GAI's multi-faceted and interrelated risks, such as the concealment of generated content, algorithmic recommendation amplification, and cross-platform diffusion; traditional models such as basic BP neural networks suffer from susceptibility to local optima and poor generalization, inadequately adapting to the non-linear, dynamic, and high-dimensional attributes of GAI-generated content. To address these gaps, this study constructed a 4-dimensional risk assessment index system (content, dissemination, sentiment, and user) and proposed a GA-optimized BP neural network model, which will enrich public opinion management theories in the AI era and provide practical, efficient tools for precise risk control. It will contribute to the construction of a safe, orderly, and trustworthy online space. [Method/Process] A mixed research method with solid theoretical foundations (information communication theory and intelligent optimization algorithms) and empirical support was adopted: Ten typical GAI-induced public opinion events were selected from Sina Weibo (selection criteria: views ≥1 million, original posts ≥60, covering technology, society, public affairs, and consumption fields). Following a four-stage evolutionary model (formation, outbreak, mitigation, and recovery) and four early warning levels (Level I-IV, corresponding to binary outputs 1000, 0100, 0010, 0001) as specified in national emergency management standards, samples were systematically categorized into four evolutionary stages and corresponding risk grades. A 12-indicator system covering content (authenticity, misleadingness, and professionalism), dissemination (speed, scope, and diffusion path), sentiment (intensity, polarization degree, and negative ratio), and user (influence, participant activity, and interaction stickiness) dimensions was constructed. The weight of each indicator was determined to ensure objectivity, and data preprocessing was performed via min-max normalization to eliminate dimensional differences. A 4-layer BP neural network (12 input neurons, 2 hidden layers with 15 and 10 neurons respectively, and 4 output neurons) was built, with initial weights, thresholds, and hyperparameters (learning rate and number of iterations) optimized by a genetic algorithm (GA). A traditional BP model served as the control group, with 70% of the data as the training set and 30% as the test set, and model performance was evaluated based on prediction accuracy. [Results/Conclusions] Experimental results confirm the significant superiority of the GA-BP model: its prediction accuracy reached 91.67%, 8.34 percentage points higher than that of the traditional BP model (83.33%). This verifies that GA optimization effectively improved model performance, enabling better capture of complex non-linear relationships among GAI-induced risk factors. The multi-dimensional index system successfully extracted core risk characteristics, realizing comprehensive identification and traceability of GAI-related public opinion risks. Limitations of this study include sample concentration on Chinese social platforms, limited case quantity, and narrow time span. Future research will expand cross-border, multi-language samples (e.g., Twitter, Facebook), enrich technical indicators (e.g., GAI content identifiability, algorithmic intervention intensity), and explore integration with deep learning models (e.g., LSTM, Transformer) to further enhance the generalizability, real-time performance, and intelligent decision-making support capabilities of the risk assessment system.
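    The sketch below gives a minimal, self-contained version of the GA-BP idea: a 12-15-10-4 multilayer perceptron whose learning rate and random initialization are chosen by a small genetic search and compared with a plain baseline. The data are synthetic stand-ins for the 12-indicator samples, and searching over the initialization seed is only a proxy for the paper's direct optimization of initial weights and thresholds.
    ```python
    # Toy GA-BP sketch: genetic search over (learning rate, init seed) for a 12-15-10-4 MLP.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 12))                                  # 12 min-max-normalized indicators
    y = (X[:, :3].mean(axis=1) * 4).astype(int).clip(0, 3)     # 4 warning levels (I-IV)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    def fitness(genome):
        """Test-set accuracy of an MLP built from one (learning rate, seed) genome."""
        lr, seed = genome
        net = MLPClassifier(hidden_layer_sizes=(15, 10), learning_rate_init=lr,
                            max_iter=500, random_state=int(seed))
        return net.fit(X_tr, y_tr).score(X_te, y_te)

    pop = [(10 ** rng.uniform(-3, -1), rng.integers(0, 1000)) for _ in range(8)]
    for _ in range(5):                                          # generations
        parents = sorted(pop, key=fitness, reverse=True)[:4]    # selection
        children = [(lr * 10 ** rng.normal(scale=0.1),          # mutate the learning rate
                     rng.integers(0, 1000))                     # fresh initialization seed
                    for lr, _ in parents]
        pop = parents + children

    best = max(pop, key=fitness)
    baseline = MLPClassifier(hidden_layer_sizes=(15, 10), max_iter=500,
                             random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
    print("GA-selected accuracy:", fitness(best), "| plain BP accuracy:", baseline)
    ```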

  • HUANG Xiaotang, YAO Qibin
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0590
    Accepted: 2025-12-19

    [Purpose/Significance] Under the strategic background of national cultural digitization and the high-quality development of public services, artificial intelligence generated content (AIGC) has become a core engine driving the digital and intelligent transformation of galleries, libraries, archives, and museums (GLAM). While AIGC offers unprecedented opportunities for content production and knowledge dissemination, current implementations often suffer from fragmentation, leading to new "data islands" and service barriers. Unlike previous studies, which treat GLAM institutions as a homogeneous whole, this paper aims to clarify the differentiated application paths of AIGC by distinguishing the unique "resource-technology-service" logic of each institution type. It seeks to reveal the structural causes of current collaborative dilemmas and construct a systematic collaborative development mechanism. This research is significant for breaking down institutional barriers, promoting the deep integration of cultural resources, and guiding GLAM institutions to shift from isolated technological upgrades to a clustered, symbiotic development model. [Method/Process] Adopting a digital ecosystem perspective, this study constructs a "Resource Attributes - Technology Adaptation - Service Goals" framework to systematically analyze the heterogeneous characteristics of the four institution types. The analysis reveals how distinct data morphologies - ranging from structured texts in libraries and semi-structured records in archives to multimodal artifacts in museums and unstructured works in art galleries - fundamentally dictate the differentiated deployment of generative text or vision models. By examining core capabilities including intelligent content twinning, editing, and creation, the study demonstrates how service goals strictly regulate technical choices: the emphasis on "access" and "trust" in libraries and archives necessitates technologies that ensure semantic accuracy and historical authenticity, whereas the pursuit of "experience" and "creativity" in museums and art galleries favors generative tools for immersive interaction and open-ended aesthetic expression. [Results/Conclusions] To address the identified challenges of fragmented development, the study proposes a tripartite collaborative development architecture consisting of a "Front-end Resource Layer," a "Mid-platform Technology Layer," and an "End-user Service Layer." The Front-end Resource Layer focuses on constructing a unified multimodal data foundation and standardized semantic ontology to bridge the semantic gap between heterogeneous institutional data. The Mid-platform Technology Layer advocates for the co-construction of an industry-specific general large model and a knowledge reasoning engine; by sharing API interfaces and computing power, this layer solves the high technical threshold and cost issues for smaller institutions, acting as a ubiquitous "industry capability hub." The End-user Service Layer aims to build a one-stop knowledge exploration portal and cross-domain expert workbenches, eliminating service isolation and creating integrated cultural scenarios. The study concludes that GLAM institutions must transition from "cultural containers" to "knowledge engines" through this architecture. Future research should further focus on copyright ethics, algorithmic governance, and new modes of human-machine collaboration to ensure the sustainable and trustworthy development of the digital cultural community.

  • LI Dan, FENG Danran
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0493
    Accepted: 2025-12-19

    [Purpose/Significance] Against the backdrop of intensifying global technological competition and the drive for scientific and technological progress under national innovation strategies, generative artificial intelligence (AI) technology, as an emerging disruptive technology, has had a profound impact on the economy and society through its widespread application. However, the diffusion of this technology in the market still faces numerous challenges. By constructing a complex network evolutionary game model, this paper delves into the micro-level decision-making factors influencing enterprises' research and development (R&D) of generative AI technology, as well as the specific impact of user group interactions on the effectiveness of technology diffusion. The research seeks to uncover the inherent laws governing technology diffusion, providing a scientific basis for policymakers and corporate practitioners to promote the healthy development and effective diffusion of generative AI technology, thereby fostering comprehensive socio-economic progress. [Method/Process] This paper adopts a complex network evolutionary game model as the primary research method, integrating complex network theory, technological innovation diffusion theory, and social influence theory to construct a game model of corporate decision-making regarding generative AI technology. By incorporating the structural characteristics of complex networks and the dynamic mechanisms of evolutionary games, the study simulates the R&D decision-making processes of enterprises under varying conditions of user adoption rates, government subsidy levels, differences in technology benefits and costs, and technology spillover effects. Numerical simulation analysis is then employed to explore the specific impacts of changes in these factors on the diffusion of generative AI technology, thereby revealing in depth the micro-mechanisms underlying technology diffusion. [Results/Conclusions] The research results indicate that an increase in user adoption rates significantly and positively drives the diffusion of generative AI technology, with moderate user dependency behaviors further accelerating this process. Government subsidies play a particularly prominent role in promoting technology diffusion when user adoption rates and the initial proportion of enterprises choosing R&D strategies in the network are low. However, as these proportions rise, the marginal effect of subsidies gradually diminishes. The difference in benefits between enterprises that develop generative AI technology and those that do not has a marked impact on technology diffusion, whereas the impact of cost differences is relatively minor. Furthermore, the spillover effects of generative AI technology may induce free-rider behaviors among other enterprises, hindering technology diffusion. Additionally, when the maturity level of generative AI technology is low, user trust in the technology is reduced, thereby inhibiting its widespread dissemination. Based on these conclusions, this paper proposes policy recommendations such as encouraging user participation, flexibly adjusting subsidy policies, enhancing technology maturity, and establishing intellectual property laws and regulations to facilitate the effective diffusion of generative AI technology.
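    As a toy illustration of the network evolutionary game described above, the sketch below places firms on a scale-free network, lets each choose between developing generative AI and free-riding on spillovers, and updates strategies with a Fermi imitation rule. All payoff parameters (adoption rate, subsidy, spillover strength) are invented placeholders rather than the paper's calibration.
    ```python
    # Evolutionary game on a scale-free firm network with a Fermi update rule.
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    G = nx.barabasi_albert_graph(200, 3, seed=0)
    strategy = rng.random(200) < 0.2           # True = the firm invests in generative-AI R&D

    ADOPTION, SUBSIDY, BENEFIT, COST, SPILLOVER = 0.5, 0.3, 2.0, 1.2, 0.4

    def payoff(i):
        """Payoff of firm i given its own strategy and its neighbors' strategies."""
        frac_rd = np.mean([strategy[j] for j in G[i]]) if G.degree(i) else 0.0
        if strategy[i]:
            return ADOPTION * BENEFIT + SUBSIDY - COST
        return SPILLOVER * frac_rd * BENEFIT   # non-developers free-ride on neighbors' spillovers

    for step in range(201):
        i = rng.integers(200)
        j = rng.choice(list(G[i]))
        # Fermi rule: i imitates j with probability increasing in the payoff gap
        p = 1.0 / (1.0 + np.exp((payoff(i) - payoff(j)) / 0.1))
        if rng.random() < p:
            strategy[i] = strategy[j]
        if step % 50 == 0:
            print(f"step {step}: share of R&D firms = {strategy.mean():.2f}")
    ```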

  • HE Ying, SUN Wei, LI Zhoujing, MA Xiaomin
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248
    Accepted: 2025-12-12

    [Purpose/Significance] The formulation of evidence-based science and technology policy critically relies on the timely and accurate provision of relevant, high-quality evidence. However, current evidence recommendation practices often suffer from significant limitations in both accuracy and efficiency, hindering the scientific rigor and intelligent application of evidence within the policy-making process. These shortcomings hinder policymakers' ability to leverage the most pertinent research and data, potentially leading to suboptimal decisions. Addressing this critical gap, this research proposes a novel knowledge graph-based evidence recommendation method. The primary objective is to substantially enhance the scientific foundation and intelligent capabilities of evidence utilization during policy formulation. This method aims to empower policymakers by providing more reliable, contextually relevant, and efficiently retrieved data support. Ultimately, this will foster more robust, transparent, and demonstrably effective science and technology policies grounded in comprehensive research insights. [Method/Process] To achieve these objectives, this study systematically constructs a domain-specific knowledge graph meticulously centered on the intricate citation relationships between policy documents and academic research papers. This graph serves as the foundational semantic network representing entities (policies, articles, topics, authors, institutions, etc.) and their multifaceted interconnections. Most importantly, we introduce and adapt the Knowledge Graph Attention Network (KGAT) algorithm in an innovative way. Leveraging KGAT's sophisticated graph attention mechanisms, our model effectively captures and learns complex, high-order semantic relationships between policy requirements (represented as queries or specific nodes) and potential evidence sources (research paper nodes). This deep relational understanding enables nuanced evidence relevance scoring and personalized recommendation. To rigorously validate the proposed method's practical efficacy and performance, we conducted an extensive empirical study within the specific domain of agricultural science and technology policy. Furthermore, to demonstrate real-world applicability and provide a tangible tool for policymakers, we designed and implemented a fully functional Evidence Intelligent Recommendation System (EIRS). This system seamlessly integrates the core KG-based recommendation engine and incorporates advanced intelligent analysis capabilities. Significantly, EIRS supports an end-to-end workflow initiated by natural language policy questions posed by users, enabling intuitive interaction and precise, demand-driven evidence retrieval and recommendation. [Results/Conclusions] Experiments conducted on real-world datasets within the agricultural science and technology policy domain demonstrate the superior performance of the proposed KGAT-based recommendation method. It consistently outperforms several state-of-the-art baseline algorithms across multiple key evaluation metrics, including precision, recall, normalized discounted cumulative gain (NDCG), and mean reciprocal rank (MRR), quantitatively confirming its significantly stronger recommendation capability. In addition to quantitative metrics, the model inherently offers enhanced explainability due to the transparent nature of the knowledge graph structure and the attention weights learned by KGAT, allowing for insights into why specific evidence is recommended, based on its semantic connections to the policy query. Concurrently, the implemented EIRS has proven to be highly effective in practice. It efficiently identifies and recommends evidence resources exhibiting a strong match with complex policy requirements expressed in natural language. The system's successful deployment underscores its potential to tangibly augment the scientific underpinning of science and technology policy development. By effectively bridging the gap between vast research knowledge and specific policy needs through intelligent, accurate, and explainable recommendations, this research provides a novel, practical pathway towards realizing truly intelligent and rigorously evidence-based policy formulation processes. The methodology and system prototype offer a valuable and adaptable framework for various policy domains beyond the presented case study.
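    For reference, the ranking metrics named above (precision@k, recall@k, NDCG, MRR) can be computed as in the sketch below for a single ranked evidence list. The document IDs and relevance judgments are invented placeholders, not data from the EIRS evaluation.
    ```python
    # Ranking metrics for one recommended evidence list against a relevant set.
    import numpy as np

    def precision_recall_at_k(ranked, relevant, k):
        hits = len(set(ranked[:k]) & relevant)
        return hits / k, hits / len(relevant)

    def ndcg_at_k(ranked, relevant, k):
        gains = np.array([1.0 if d in relevant else 0.0 for d in ranked[:k]])
        discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
        dcg = float(gains @ discounts)
        idcg = float(discounts[:min(len(relevant), len(gains))].sum())  # ideal DCG, binary relevance
        return dcg / idcg if idcg > 0 else 0.0

    def mrr(ranked, relevant):
        for rank, d in enumerate(ranked, start=1):
            if d in relevant:
                return 1.0 / rank
        return 0.0

    ranked_evidence = ["paper_12", "paper_03", "paper_44", "paper_07", "paper_19"]
    relevant_evidence = {"paper_03", "paper_19"}

    p, r = precision_recall_at_k(ranked_evidence, relevant_evidence, k=5)
    print("P@5:", p, "R@5:", r)
    print("NDCG@5:", ndcg_at_k(ranked_evidence, relevant_evidence, k=5))
    print("MRR:", mrr(ranked_evidence, relevant_evidence))
    ```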

  • MAO Kaiyan
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0429
    Accepted: 2025-12-02

    [Purpose/Significance] Chinese classical texts are central to preserving and transmitting traditional culture; however, promoting them among children has long faced many obstacles: the linguistic barrier posed by classical Chinese, the cognitive distance caused by cultural discontinuity, and the limitations of static and monotonous promotional forms. These challenges have often resulted in low levels of engagement and comprehension among young readers. The recent emergence of Sora-type video generation models, characterized by their ability to produce coherent long-form narratives, integrate multimodal information, and simulate spatially consistent scenes, opens up new opportunities for bridging this gap. This study aims to investigate how such models can be effectively employed in the promotion of Chinese classics among children, to evaluate their potential benefits and inherent risks, and to develop practical strategies that align technological capabilities with educational and cultural objectives. [Method/Process] This research adopts a combined approach of literature review, case study, and comparative analysis. First, it reviews existing literature on the application of artificial intelligence in reading promotion, highlighting current achievements and limitations. Second, it uses representative Chinese classics, including Shan Hai Jing, Strange Tales from a Chinese Studio (Liaozhai Zhiyi), and The Book of Songs (Shijing), to examine how Sora-generated videos function in different promotional contexts. Third, it constructs an analytical framework based on three interrelated dimensions: scenes, content, and approaches. Within this framework, the study identifies opportunities, delineates challenges, and proposes targeted countermeasures. [Results/Conclusions] Sora-type video generation can substantially enhance the promotion of Chinese classics among children. At the scene level, it allows traditional spaces to be extended into immersive and hybrid environments, thereby broadening access beyond classrooms and libraries. At the content level, it transforms abstract imagery and complex narratives into visual forms, reducing cognitive barriers and accommodating differentiated learning needs. At the approach level, it facilitates text-image complementarity, cross-media integration, and personalized recommendations, thereby strengthening engagement and sustaining reading motivation. However, the study also cautions against significant risks. These include the mismatch between generated content and specific promotional settings, the danger of oversimplification or distortion of classical texts, and the over-reliance on audiovisual materials that might undermine children's ability to engage in deep textual reading. To address these risks, the article proposes a threefold strategy: differentiated scene design, content transformation with cultural fidelity, and complementary pathways that ensure children transition from video to text.

  • LI Shuqi, LI Jian
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0459
    Accepted: 2025-12-02

    [Purpose/Significance] Digital hoarding has emerged as a significant behavioral phenomenon in the digital age, particularly prevalent among social media users who engage in the excessive acquisition and retention of digital content. This behavior is further amplified by algorithmic recommendation systems that continuously personalize content delivery. Although existing research has examined individual psychological factors or platform characteristics using static approaches, it lacks a dynamic perspective to understand the co-evolutionary relationship between platform strategies and user behaviors. This study addresses this research gap by introducing evolutionary game theory as an innovative analytical framework. Theoretically, the significance lies in modeling the dynamic interactions between platforms' algorithmic adjustments and users' hoarding behaviors, which provides new insights into the adaptive mechanisms within socio-technical systems. From a practical standpoint, this research offers valuable implications for promoting healthier digital environments and developing sustainable governance models for platforms that balance commercial objectives with user well-being. [Method/Process] This study employs evolutionary game theory to model the dynamic interactions between social media platforms and boundedly rational users. This method is well suited for analyzing how strategies co-evolve over time towards stable states. Drawing on the literature on user behavior and platform economics, a game-theoretic model was developed. Numerical simulations in MATLAB analyzed evolutionary paths across four platform types (Instant Messaging, Public, Short Video, and Vertical Community), with the model calibrated against empirical typologies to investigate how key factors influence long-term outcomes. [Results/Conclusions] The simulation results reveal that the evolutionary path of the platform-user interaction system is highly sensitive to key parameters, ultimately converging to different evolutionarily stable strategies (ESS) under varying conditions. A principal finding is that a unilateral increase in algorithmic recommendation intensity by platforms, while potentially boosting short-term engagement, does not guarantee long-term benefits and may instead drive users towards non-hoarding strategies due to increased cognitive burden. Crucially, the reasonable regulation of recommendation intensity is identified as the key to achieving sustainable, positive interactions. Moderate algorithmic recommendations can effectively alleviate information overload, reduce the negative impacts of hoarding, enhance user experience and satisfaction, and ultimately increase long-term platform benefits, creating a win-win scenario. The study provides significant managerial implications, suggesting that platform operators should incorporate user well-being metrics into algorithm evaluation frameworks, moving beyond purely engagement-driven models. Differentiated governance strategies are recommended for various platform types, such as implementing intelligent filtering on instant messaging apps and content quality incentives on vertical communities. However, this study has limitations, primarily its assumption of user homogeneity, which overlooks the impact of individual differences in preferences and digital literacy. Future research should introduce user heterogeneity, explore multi-platform competition scenarios, and validate the model with empirical data to enhance its practical predictive power and application value.
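    The two-population dynamics described above can be written as coupled replicator equations, with x the share of hoarding users and y the share of platforms choosing high-intensity recommendation. The payoff matrices below are illustrative placeholders, not values from the paper's MATLAB calibration.
    ```python
    # Two-population replicator dynamics for the platform-user hoarding game.
    import numpy as np
    from scipy.integrate import solve_ivp

    U = np.array([[1.0, 1.5],    # user payoff: hoard vs. (high, low) recommendation intensity
                  [1.2, 1.0]])   # user payoff: not hoard vs. (high, low)
    P = np.array([[2.0, 0.5],    # platform payoff: high intensity vs. (hoarding, non-hoarding) users
                  [1.5, 1.0]])   # platform payoff: low intensity vs. (hoarding, non-hoarding)

    def replicator(t, z):
        x, y = z
        user_gap = (U[0] - U[1]) @ np.array([y, 1 - y])       # hoard minus not-hoard payoff
        platform_gap = (P[0] - P[1]) @ np.array([x, 1 - x])   # high minus low intensity payoff
        return [x * (1 - x) * user_gap, y * (1 - y) * platform_gap]

    sol = solve_ivp(replicator, (0, 50), [0.4, 0.6])
    x_end, y_end = sol.y[:, -1]
    print(f"long-run shares: hoarding users = {x_end:.2f}, high-intensity platforms = {y_end:.2f}")
    ```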

  • HU Anqi
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0448
    Accepted: 2025-11-28

    [Purpose/Significance] The rapid proliferation of generative artificial intelligence (AI), exemplified by models like DeepSeek-R1, has precipitated a paradigm shift across various sectors, positioning AI literacy as an indispensable competency for the future workforce. University students, as digital natives and pivotal agents of technological adoption and innovation, stand at the forefront of this transformation. Their proficiency in understanding, utilizing, and critically evaluating AI technologies directly influences their academic performance, research capabilities, and long-term career adaptability. Although existing literature has begun to explore the conceptual landscape of AI literacy, a significant gap remains: there is no robust, empirically validated competency framework specifically tailored to the unique learning contexts, developmental needs, and future roles of university students within China's higher education system. This study aims to address this critical gap by constructing and validating a comprehensive AI literacy competency framework for college students. Its primary significance lies in moving beyond theoretical discourse to provide an evidence-based model that can guide the systematic development of targeted training programs, enriching the theoretical underpinnings of AI literacy education and offering practical guidance for cultivating high-quality talent equipped for the intelligent era. [Method/Process] This research employed a mixed-methods approach, integrating qualitative and quantitative methods to provide both theoretical grounding and empirical robustness. The study commenced with a qualitative phase utilizing the grounded theory methodology. A systematic analysis of 112 core academic publications (2019-2024) from databases such as CNKI and Web of Science was conducted. Through a rigorous process of open coding, axial coding, and selective coding, facilitated by NVivo 11 software, we extracted 300 initial concepts, which were subsequently synthesized into 26 sub-categories and ultimately 4 main categories. This process resulted in the preliminary construction of a four-dimensional AI literacy competency framework. Following this, a quantitative phase was implemented to test and refine the framework. A detailed questionnaire was developed based on the identified dimensions and indicators. Utilizing a five-point Likert scale, the questionnaire measured 26 variables corresponding to the framework's sub-components. A total of 586 valid responses were collected from undergraduate students across universities in Jiangsu Province, China. The dataset was randomly split into two halves. The first subset (N=293) underwent exploratory factor analysis (EFA) using SPSS to uncover the underlying factor structure and assess internal consistency reliability via Cronbach's alpha. The second subset (N=293) was subjected to confirmatory factor analysis (CFA) using AMOS to verify the hypothesized factor structure, evaluate model fit indices (e.g., CMIN/DF, CFI, TLI, RMSEA), and establish convergent and discriminant validity by examining average variance extracted (AVE) and composite reliability (CR). [Results/Conclusions] The empirical analyses strongly support the validity and reliability of the proposed competency framework. The EFA clearly identified four distinct factors that aligned with the predefined dimensions, with a total variance explained of 69.916% and all factor loadings exceeding 0.6. The CFA results demonstrated excellent model fit (CMIN/DF=1.921, CFI=0.950, TLI=0.943, RMSEA=0.056), confirming the structural integrity of the framework. Furthermore, all constructs exhibited high internal consistency (Cronbach's α>0.90) and satisfactory convergent (AVE>0.5, CR>0.7) and discriminant validity. The finalized framework therefore comprises four interconnected core dimensions: AI Cognition (encompassing knowledge of basic concepts, applications, value, and risks), AI Skills (covering practical abilities from tool usage and programming to critical evaluation and innovation), AI Ethics (emphasizing social responsibility, privacy, intellectual property, and legal compliance), and AI Thinking (fostering higher-order cognitive abilities such as computational, critical, and systemic thinking). Based on this validated framework, the study proposes a systematic and multi-faceted training system. This system outlines clear training objectives, identifies key stakeholders (e.g., university libraries, teaching centers, schools, and external enterprises), designs layered training content and pathways corresponding to each dimension, and suggests implementation strategies focusing on faculty development, a comprehensive assessment and feedback mechanism, and the strategic integration of AI-related resources. The main limitation of this study is that the questionnaire respondents in the empirical testing stage were primarily college students. Future research can include teachers, employers, and AI experts to refine the indicator weights and content of the competency framework from multiple perspectives, using the Delphi method, expert interviews, and other approaches, so as to enhance the framework's authority and universality.
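    The reliability and factor-structure checks reported above can be outlined as in the sketch below, which computes Cronbach's alpha and fits an exploratory factor model on synthetic Likert-style data. It mirrors the 26-item, four-factor layout of the abstract, but the data and loadings are randomly generated, and scikit-learn's FactorAnalysis stands in for the SPSS/AMOS workflow.
    ```python
    # Cronbach's alpha and an exploratory factor fit on synthetic 26-item data.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_respondents, n_items, n_factors = 293, 26, 4
    latent = rng.normal(size=(n_respondents, n_factors))
    loadings = rng.uniform(0.5, 0.9, size=(n_factors, n_items))
    items = latent @ loadings + rng.normal(scale=0.6, size=(n_respondents, n_items))

    def cronbach_alpha(data):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = data.shape[1]
        item_var = data.var(axis=0, ddof=1).sum()
        total_var = data.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    print("Cronbach's alpha (all items):", round(cronbach_alpha(items), 3))

    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(items)
    squared_loadings = fa.components_ ** 2          # (factor, item) explained variation
    print("item loading most strongly on each factor:",
          [int(np.argmax(squared_loadings[f])) for f in range(n_factors)])
    ```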

  • FENG Li, GUO Bochi, GAO Mian
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0444
    Accepted: 2025-10-29

    [Purpose/Significance] The rapid expansion of artificial intelligence generated content (AIGC) is transforming how intellectual property (IP) literacy is cultivated in universities. Conventional approaches, often constrained by disciplinary fragmentation, uneven teaching capacity, and time-space limitations, are increasingly misaligned with human-AI collaborative learning. Against this backdrop, IP literacy must integrate legal knowledge, ethical judgment, compliance awareness, and AI-enabled creative practice. This study clarifies the renewed connotations of IP literacy in the AIGC era, develops a theoretically grounded model of influencing factors, and examines how multiple educational conditions combine to generate high-level outcomes. By focusing on IP literacy rather than generic digital competence, the paper addresses a clear gap in existing research and offers a configuration-based understanding that links theory to implementable strategies for intelligent, student-centered IP literacy education. [Method/Process] Grounded in Activity Theory, the study developed a six-dimensional framework consisting of the following variables: teacher professional competence, AI-IP awareness, diversified educational support, role division, evaluation mechanisms, and AI resources. These variables were operationalized via a structured questionnaire. Fuzzy-set Qualitative Comparative Analysis (fsQCA) was then employed to identify conjunctural causality and equifinal pathways that extend beyond linear models. High-outcome configurations were identified through variable calibration, truth-table analysis, and logical minimization. Robustness was confirmed by tightening the PRI consistency threshold from 0.80 to 0.85; the path structure, overall coverage, and overall consistency remained stable. [Results/Conclusions] Findings show that AIGC-enabled IP literacy emerges through multiple effective configurational paths, rather than a single dominant factor. Across high-outcome configurations, teacher professional competence, AI-IP awareness, and diversified educational support consistently function as core drivers that shape learning processes and outcomes. Evaluation mechanisms and AI resources act as complementary or substitutive conditions, reinforcing effectiveness under specific institutional and resource constraints. Three typical paths were identified: a path emphasizing practice generation coupled with collaborative organization; a path that integrates resource sharing with practice-oriented development; and a path highlighting collaborative division of labor and effective communication to compensate for limited technical supply. Together, these paths confirm the internal logic of the six-dimensional model and demonstrate that coordinated configurations, rather than isolated improvements, are necessary to optimize IP literacy education in AI-rich contexts. Practical implications include strengthening AI-oriented teacher development, embedding AI-IP awareness in curricula and supporting services, building cross-unit collaboration mechanisms, and aligning role division and process evaluation with available AI resources. Although the cross-sectional design and limited scope constrain generalizability, the results provide a theoretically grounded and empirically supported basis for developing intelligent, collaborative, and student-centered IP literacy systems and offer a foundation for future longitudinal and comparative research in AIGC-enabled higher education.
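    To make the fsQCA machinery concrete, the sketch below shows direct-method calibration of raw scores into fuzzy membership and the sufficiency consistency of a two-condition configuration for the outcome. The anchor values and the five-case data vectors are invented placeholders, not the study's calibration.
    ```python
    # fsQCA building blocks: direct calibration and sufficiency consistency.
    import numpy as np

    def calibrate(raw, full_non, crossover, full_in):
        """Direct method: the three anchors map to log-odds of about -3, 0, and +3."""
        scale_hi = 3.0 / (full_in - crossover)
        scale_lo = 3.0 / (crossover - full_non)
        scale = np.where(raw >= crossover, scale_hi, scale_lo)
        return 1.0 / (1.0 + np.exp(-scale * (raw - crossover)))

    def consistency(condition, outcome):
        """Sufficiency consistency of 'condition -> outcome': sum(min) / sum(condition)."""
        return np.minimum(condition, outcome).sum() / condition.sum()

    teacher_comp = calibrate(np.array([2.1, 3.4, 4.6, 4.9, 1.8]), 2.0, 3.0, 4.5)
    ai_ip_aware = calibrate(np.array([2.5, 4.1, 4.4, 3.9, 2.2]), 2.0, 3.0, 4.5)
    outcome = calibrate(np.array([2.3, 4.0, 4.8, 4.4, 2.0]), 2.0, 3.0, 4.5)

    config = np.minimum(teacher_comp, ai_ip_aware)   # fuzzy "AND" of the two conditions
    print("consistency of (teacher_comp AND ai_ip_aware) -> outcome:",
          round(consistency(config, outcome), 3))
    ```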

  • REN Fubing, LUO Ya
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0326
    Accepted: 2025-10-20

    [Purpose/Significance] In the era of widespread social media, network cluster behavior has emerged as a significant phenomenon that shapes online public opinion and collective action. Although existing research has thoroughly examined macro-level drivers and developed evolutionary stage models for network cluster behavior, there is still a significant gap in our understanding of the micro-level cognitive mechanisms that dynamically propel its evolution. Cognitive biases, which are inherent tendencies in human cognition, are amplified in online group interactions. This study specifically addresses this gap by adopting a cognitive bias perspective to investigate the evolution mechanism of network cluster behavior, focusing on campus hot events as highly relevant and sensitive cases. These events often involve students, parents, educational institutions, and the wider public, covering core issues such as campus safety, management disputes, teacher-student relations, and student rights. Their inherent emotional resonance, rapid dissemination within specific online communities, and potential for severe damage to reputation and social order necessitate deeper understanding. The core innovation and significance of this research lie in: 1) systematically integrating cognitive bias theory to analyze the complete lifecycle evolution of network cluster behavior in campus events; 2) empirically revealing how specific biases dynamically manifest and interact at various stages, shaping the trajectory of network cluster behavior; 3) providing a richer theoretical framework for network cluster action theory; and 4) offering empirical evidence for formulating targeted governance strategies to mitigate risks associated with campus-related online crises, thereby promoting constructive online discourse and campus stability. [Method/Process] To rigorously investigate the core research question, this study employed the grounded theory methodology. Based on sustained high popularity rankings on the "Zhiwei Shijian" platform, ten representative campus hot events were systematically selected to ensure coverage of diverse campus issues. Extensive datasets of user comments related to these ten events were collected from the Sina Weibo platform, serving as the core empirical foundation. The data collection timeframe spanned the complete lifecycle of each event, from initial emergence to eventual subsidence. Following the grounded theory process, the collected textual data underwent a meticulous three-stage coding procedure to induce and refine textual themes. Through this process, facilitated by qualitative data analysis software, a substantive theoretical model was ultimately constructed. This model delineates the evolutionary path and internal mechanisms of network cluster behavior in campus events under the influence of cognitive biases. The grounded theory method was deemed highly appropriate due to its capacity for deeply exploring complex social processes and emergent phenomena directly from rich, context-specific data. [Results/Conclusions] The study found that the evolution mechanism of network cluster behavior in the context of campus hot topics mainly consists of five stages: public opinion induction, public opinion bias, public opinion diffusion, public opinion outbreak, and public opinion subsidence. Based on these findings, governance strategies for such campus network events have been proposed, including identifying triggering factors, avoiding cognitive biases, enhancing user literacy, promoting collaborative guidance, and mitigating secondary risks.

  • CHI Yuzhuo, ZHANG Bing
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0348
    Accepted: 2025-10-17

    [Purpose/Significance] Open scientific data policies play a pivotal role in promoting the open sharing, unrestricted access to, and reuse of scientific data, thereby enhancing research efficiency and driving innovation. Despite their significance, research on the diffusion of these policies has predominantly focused on policy formulation, often neglecting the critical aspect of policy adoption and implementation at the local government level. This study aims to address this gap by comprehensively examining the factors that influence the adoption of open scientific data policies by prefecture-level governments in China. The research was motivated by the need to understand how these policies spread across different regions, as well as the underlying mechanisms that facilitate or hinder their adoption. In doing so, the study expands the existing knowledge base by shedding light on the dynamics of policy diffusion in the context of open scientific data, a relatively under-explored area compared to other policy domains. [Method/Process] To achieve its objectives, the study employed an integrated research methodology. First, it utilized a policy diffusion model, adapted from the well-established Berry model, to theoretically frame the research. This model was enhanced by incorporating insights from a comprehensive literature review, which helped identify key internal and external factors influencing policy diffusion. Second, the study employed event history analysis to empirically test these factors using data from 286 Chinese cities over the period from 2018 to 2022. This method allows for the examination of the temporal sequence of policy adoption and the identification of causal relationships between the influencing factors and policy diffusion. Finally, a fuzzy-set qualitative comparative analysis (fsQCA) was applied to refine the understanding of multiple causal configurations that lead to successful policy adoption. This approach captures the complexity and interdependence of factors in policy diffusion processes, offering a nuanced perspective that goes beyond traditional statistical methods. [Results/Conclusions] The study identified four primary pathways for the diffusion of open scientific data policies in China: resource-driven, organization-and-human-capital-led, multi-stakeholder collaborative, and technology-guided. The resource-driven pathway emphasizes the significance of research funding and the establishment of professional organizations in facilitating policy adoption. The organization-and-human-capital-led pathway highlights the role of government official mobility and a skilled workforce in driving policy diffusion. The multi-stakeholder collaborative pathway underscores the importance of coordinated efforts among various stakeholders, including government agencies, research institutions, and industry partners. Finally, the technology-guided pathway focuses on innovation capacity and professional management as key drivers of policy adoption. The findings reveal a heavy reliance on administrative measures in driving policy diffusion, which may lead to unintended consequences such as policy sustainability issues and a lack of alignment with local needs. Therefore, local governments are encouraged to adopt tailored diffusion strategies that consider their specific contexts and resource endowments. Future research should explore the performance of these policies in achieving their intended outcomes and conduct comparative studies across different regions to enhance the generalizability of the findings.
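    As an illustration of the event history setup described above, the sketch below builds a synthetic city-year risk-set panel and fits a discrete-time logit of policy adoption on hypothetical covariates (research funding, neighboring adopters) with year effects. It is a stand-in for the 286-city analysis, not the study's model.
    ```python
    # Discrete-time event history sketch: cities leave the risk set once they adopt.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for city in range(286):
        funding = rng.normal()
        neighbor_adopters = 0
        for year in range(2018, 2023):
            p = 1 / (1 + np.exp(-(-3.0 + 0.8 * funding + 0.5 * neighbor_adopters)))
            adopted = rng.random() < p
            rows.append({"city": city, "year": year, "funding": funding,
                         "neighbor_adopters": neighbor_adopters, "adopt": int(adopted)})
            if adopted:
                break                          # the city exits the risk set after adoption
            neighbor_adopters += rng.integers(0, 2)

    panel = pd.DataFrame(rows)
    model = smf.logit("adopt ~ funding + neighbor_adopters + C(year)", data=panel).fit(disp=0)
    print(model.params[["funding", "neighbor_adopters"]])
    ```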

  • JIANG Jingze, ZHOU Tianmin, LI Mei, CHENG Cheng, CHEN Haiyan
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0289
    Accepted: 2025-10-17

    [Purpose/Significance] With the rapid advancement of artificial intelligence (AI), university libraries are undergoing a deep transformation from traditional resource repositories to intelligent service ecosystems. This transformation poses a significant challenge to the conventional competencies of librarians and underscores the necessity for a systematic reconstruction of these competencies. Existing studies often lack empirically supported and integrative models, and they seldom bridge the gap between AI application and competence development. To address these shortcomings, this study proposes a core competence model for hybrid AI librarians, integrating technical, service, and management dimensions. The research highlights its innovation by not only theorizing but also empirically validating the model through grounded data, positioning the study as a meaningful contribution to the discourse on digital librarianship. Different from previous literature, it integrates AI platform practices within the competency framework, which serves both to enrich the theoretical underpinnings and to enhance the practical applicability of the model. This provides actionable implications for the sustainable development of librarianship in the context of national strategies for digital transformation and technological innovation. [Method/Process] The study employed a mixed-methods approach. First, a literature review was conducted to analyze trends in AI applications within university libraries. Then, semi-structured in-depth interviews were carried out with ten librarians from multiple universities that have deployed the DeepSeek intelligent platform. The participants covered technical, service, and management positions, had more than three years of experience using AI tools, and held middle to senior professional titles. Following data collection, grounded theory was applied with three levels of coding (open, axial, and selective) to inductively derive categories and explore how technical, service, and management competencies interact. The principle of data saturation was strictly observed to ensure methodological rigor, and no additional categories emerged after the three competency domains were established. [Results/Conclusions] Findings indicate that the core competencies of hybrid AI librarians revolve around three interdependent domains. Technical competence involves intelligent tool operation, data analysis, and system maintenance, supporting the integration of AI into daily workflows. Service competence emphasizes user-centered design, personalized recommendation, and human-AI collaborative interaction, ensuring that technical functions translate into user value. Management competence addresses resource allocation, cross-department collaboration, and ethical governance, safeguarding sustainability, compliance, and innovation. Together, these dimensions form a "technology-service-management" dynamic balance model, characterized by reinforcing loops in which technology drives service, service demands managerial support, and management stabilizes technology-service integration. Furthermore, a training and cultivation framework was proposed, offering differentiated professional pathways based on librarians' roles and growth stages. The study concluded that such a model not only enhances service effectiveness but also contributes to national innovation strategies. The study's limitations include its restriction to a single country and its small sample size. Future research should expand the sample base, employ comparative studies across institutions, and further examine the weighting of competencies.

  • HUORuijuan, ZHANGHai
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0467
    Accepted: 2025-10-09

    [Purpose/Significance] In the current era, libraries are essential to fostering a reading-oriented society because they act as key hubs for disseminating knowledge, with the goal of increasing public cultural literacy and cultivating an intellectual atmosphere. However, the lack of a professional framework for reading promotion in libraries severely hinders these efforts. Without clear standards, activities lack systematic planning, which leads to inefficiency and an inability to address diverse reading needs. This study systematically examines the professionalization of library reading promoters. By analyzing its dimensions and influencing factors, the study fills a research gap, enriches library science theory, and provides guidance for cultivating high-quality reading promotion teams. [Method/Process] In-depth interviews were used as the primary method to ensure research rigor. Fifteen participants were selected through purposeful sampling, including library scholars, experienced reading promoters, and front-line librarians. Each interview lasted between 50 and 70 minutes and covered the status of reading promotion, the challenges involved, and future expectations. Three stages of grounded theory analysis were then applied: open coding to extract initial concepts, axial coding to establish relationships between concepts, and selective coding to form a theoretical model. This data-driven approach strengthens the validity of the results. [Results/Conclusions] The research identifies three core dimensions of professionalization. Literacy specialization requires reading promoters to have a solid grasp of library science, literature, and educational psychology. Training specialization emphasizes the establishment of a systematic training program covering promotion skills, event planning, and user communication; a well-designed training system can continuously improve the professional capabilities of reading promoters. Reading promotion specialization focuses on adopting evidence-based and innovative strategies, which enhance the effectiveness of reading promotion. Four influencing factors were also identified: the curriculum system determines the content and quality of training; the resource system provides the necessary physical and digital assets for reading promotion; the user service system affects communication and interaction with readers; and the standardization system provides guidelines for evaluating reading promotion work. Based on these findings, practical suggestions are put forward, including optimizing the training model by combining theoretical learning with practical operation and establishing a standardized management system for reading promotion. Nevertheless, the study has certain limitations, primarily its relatively small sample size. Future research could expand the sample, conduct long-term follow-up studies on the impact of professionalization, and explore integrating emerging technologies such as AI and big data into the professionalization of reading promotion to further develop library reading promotion services.

  • ZHANGNing, HEBoyun
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0345
    Accepted: 2025-09-23

    [Purpose/Significance] The global population is aging at an unprecedented pace. Developing a digital capital scale is therefore of great importance as a key tool for addressing the challenges of digital inclusion for the elderly. Digital capital not only encompasses the abilities and skills of the elderly in using information technology, but also covers the interaction among the social resources, cultural capital, and economic capital they acquire in the digital environment. A validated scale thus helps enhance the theoretical understanding of the heterogeneity of the elderly's digital capabilities. [Method/Process] First, semi-structured in-depth interviews were conducted with 24 elderly individuals based on the digital capital framework, combined with digital life scenarios in China and informed by existing studies on the digital literacy and digital capabilities of the elderly. Based on the coding results of the interview transcripts, a seven-dimensional scale for measuring the digital capital of the elderly was derived. A preliminary reliability and validity analysis was then conducted on a pre-test sample of 180 respondents, and the dimension indicators were adjusted accordingly. Subsequently, using data from 380 formal questionnaires, the scale was verified and refined. Based on the principle of conceptual interpretability, the factor names of the resulting four dimensions were re-examined, and the final version of the scale was established. Elbow estimation and the K-means clustering algorithm were then used to classify the digital capital levels of the elderly. [Results/Conclusions] The final scale consists of 19 items covering four dimensions: digital resource acquisition ability, digital creation and expression ability, digital environment adaptation ability, and digital tool learning ability. Following optimization, the scale demonstrates excellent reliability and validity and aligns closely with aging-friendly usage scenarios. It can serve as a standardized tool for measuring the digital capital of the elderly population in China, laying the foundation for future large-scale surveys. By applying this scale, it is possible to effectively distinguish groups of elderly individuals with varying levels of digital capital, providing empirical support for personalized digital services for the elderly. For the first time, this study systematically applies the digital capital framework to the elderly population, compensating for the lack of standardized measurement tools and highlighting the unique needs and challenges of the elderly in terms of dimensions, usage scenarios, and capability transformation. The proposed hierarchical model of digital capital among the elderly deepens our theoretical understanding of the differences in digital capabilities within this population.
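    A minimal sketch of how the elbow estimation and K-means classification step described above might be reproduced, assuming respondent-level dimension scores are available; the file name, column names, and the chosen number of clusters are illustrative assumptions, not the authors' actual code.

```python
# Illustrative sketch only: elbow estimation plus K-means on the four
# dimension scores of the final scale. File and column names are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("digital_capital_scores.csv")  # hypothetical respondent-level data
dims = ["resource_acquisition", "creation_expression",
        "environment_adaptation", "tool_learning"]  # assumed column names
X = StandardScaler().fit_transform(df[dims])

# Elbow estimation: inspect how inertia falls as the number of clusters grows
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
            for k in range(2, 9)}
print(inertias)  # pick the k where the decrease flattens out

# Fit the chosen k (assumed here to be 3: low / medium / high digital capital)
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
df["capital_level"] = km.labels_
print(df.groupby("capital_level")[dims].mean())  # profile each level
```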

  • HOUYanhui, WANGZixuan, WANGJiakun
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0395
    Accepted: 2025-09-18

    [Purpose/Significance] Starting from the perspective of technological complementarity, this paper proposes a new approach for identifying technological opportunities that jointly uses outlier patents and hot patents. A fusion analysis of innovative outlier patents and market-mature hot patents is carried out to identify "innovation-maturity" technological opportunities that combine innovation with market maturity, which is of great significance for enriching the theory and methods of technological opportunity identification. [Method/Process] First, based on the association-distribution characteristics of patent classification numbers, a two-stage method was adopted to screen patents. In the first stage, association rule algorithms were used to find classification numbers with weak and strong associations, yielding initial outlier patents and initial hot patents. In the second stage, outlier detection algorithms were used to obtain the marginalized classification numbers of the two types of patents from the first stage. Patents containing marginalized classification numbers were retained as the final outlier patents, while, among the initial hot patents, those containing such classification numbers were removed to obtain the final hot patents. Second, different screening methods were adopted to reflect the differences in innovativeness and maturity of patent content. Using structured and unstructured data from patent databases, we constructed time-weighted indicators and keyword uniqueness indicators as the screening criteria for innovative outlier patents, and constructed a technology life-cycle stage discrimination function and patent market value indicators as the screening criteria for market-mature hot patents. The screened patents were assigned to technical fields based on the major categories of the International Patent Classification. Finally, technological opportunities were identified based on technological complementarity. A generative topographic mapping algorithm was used to obtain a map of technical blank points, the top K keywords in each blank point were extracted, and the sources of the keywords were marked to ensure that new technological opportunities combine strong innovation capability with mature market prospects. Keyword combinations derived from the two types of patents were regarded as "innovation-maturity" technological opportunities. [Results/Conclusions] Taking the field of new energy vehicle batteries as an example, an empirical analysis identified a total of 10 technological opportunities across five technical sub-fields. A content comparison with relevant policy texts showed that 7 of these opportunities are highly consistent with current policy. The identification results also align closely with the current technological layout and development direction of the field, indicating that this method is effective and scientifically sound for technological opportunity identification and can support technology forecasting and innovation decision-making.
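    A minimal sketch of the first-stage association mining described above, using the Apriori implementation in mlxtend on co-assigned classification codes; the toy transactions, support threshold, and lift cut-offs are illustrative assumptions rather than the paper's data or parameters.

```python
# Illustrative sketch only: mine strong vs. weak associations between IPC codes
# co-assigned to the same patent, using Apriori from mlxtend. Toy data and
# thresholds are assumptions, not the paper's dataset.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

patents = [["H01M10", "B60L58"], ["H01M10", "H01M4"], ["B60L58", "H02J7"],
           ["H01M10", "B60L58", "H02J7"], ["H01M4", "C01B32"]]  # one code list per patent

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(patents).transform(patents), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.2, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.01)

strong = rules[rules["lift"] >= 1.2]  # strongly associated codes -> candidate hot patents
weak = rules[rules["lift"] < 1.0]     # weakly associated codes -> candidate outlier patents
print(strong[["antecedents", "consequents", "support", "lift"]])
print(weak[["antecedents", "consequents", "support", "lift"]])
```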

  • QINMiao, WANGQingfei
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0259
    Accepted: 2025-09-17

    [Purpose/Significance] With the rapid advancement of artificial intelligence (AI) technologies, libraries are transforming their service models and content offerings. Large AI models have opened up broader development opportunities for smart libraries; however, their rational adoption and application poses a significant challenge. This study employs multimodal resource profiling to investigate how libraries can optimize their use of large AI models, revealing the intrinsic relationships among various types of library resource data. Based on these insights, optimization methods and related strategies are derived to enhance the efficiency of library resource utilization and improve user experience. [Method/Process] Multimodal resource profiling is a comprehensive representation that captures the intrinsic characteristics of library resources through tag extraction, aggregation analysis, and visualization of the diverse data generated within libraries. By utilizing a novel clustering algorithm, it overcomes the high sensitivity to input parameters characteristic of traditional algorithms and achieves natural clustering across resources with varying densities, thereby enabling the generation of accurate multimodal resource profiles. The resource profiling model provides a theoretical foundation for optimizing the deployment and utilization of large AI models in libraries, while also delivering rich data support for subsequent AI model applications. The adoption strategy proposed in this study covers two aspects: model selection and model utilization. Model selection focuses on compatibility and accuracy to achieve an optimal match between the model, library resources, and user needs. Model utilization emphasizes the effectiveness and usability of the output, thereby enhancing operational efficiency and user experience. Based on this framework, the overall operational mechanism of the adoption optimization strategy is designed around continuous model monitoring, real-time collection of user feedback, iterative model updates, and dynamic adjustment of multimodal resource profiles. [Results/Conclusions] This study takes a public digital library on "Telegram" as a case study to generate multimodal resource profiles, which meticulously categorize user groups, interests, and emotional intensities. By integrating the large AI model adoption optimization strategy with the outcomes of multimodal resource profiling, the model autonomously identifies the most task-relevant features, reducing the need for manual intervention. Not only does it achieve high prediction accuracy, but the explanatory feature weights it outputs also provide a quantifiable basis for service optimization. Comparative experiments with commonly used structural modules show that the proposed method offers significant advantages over traditional recommendation systems in terms of both resource utilization efficiency and user engagement. This study lays a foundation for the future development of library technology and opens up new possibilities for the application of multimodal resource profiling.
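    The abstract does not name the novel clustering algorithm; as a rough analogue only, the sketch below uses HDBSCAN, a density-based method that handles clusters of varying density with few sensitive parameters. The tags, TF-IDF representation, and parameters are illustrative assumptions and not the authors' implementation.

```python
# Rough analogue (not the authors' algorithm): density-based clustering of
# extracted resource tags as one way to seed multimodal resource profiles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import HDBSCAN  # requires scikit-learn >= 1.3

# Assumed input: textual tags extracted from heterogeneous library resources
tags = ["rare books digitisation", "user borrowing history", "seminar recording",
        "e-journal usage logs", "manuscript scans", "chat reference transcripts"]

X = TfidfVectorizer().fit_transform(tags).toarray()
labels = HDBSCAN(min_cluster_size=2).fit_predict(X)  # label -1 marks noise/outliers

for tag, label in zip(tags, labels):
    print(label, tag)  # each non-noise cluster seeds one facet of the resource profile
```

    In practice the tag set would be far larger and typically embedded with a language model rather than TF-IDF; the point of the sketch is only that a density-based method avoids pre-specifying the number of clusters.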

  • GUOXiaojing, WENTingxiao
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0397
    Accepted: 2025-09-16

    [Purpose/Significance] In today's knowledge economy, where scientific research and innovation drive social change, accurately and scientifically assessing the social impact of research achievements has become key to optimizing the global research ecosystem. This article focuses on international systems for evaluating the social impact of scientific research achievements, providing an in-depth analysis of typical international models and strategic guidance for China to build a more comprehensive and efficient evaluation system. [Method/Process] Based on the theoretical definition of the social impact of scientific research achievements, eight major cases of third-party evaluation were selected: the EU SIAMP, the US STAR METRICS, the UK REF, the Dutch SEP, the Italian VQR, the Canadian CAHS, the Australian ERA, and the Japanese NIAD-QE. Relevant information was collected through literature research and online investigation, and a cross-national comparative analysis was conducted across three dimensions to identify key characteristics: system elements (establishment time, establishing entity, main characteristics, evaluation scope, and strategic objectives), mechanism processes (definition of evaluation objects, establishment of evaluation procedures, and application of evaluation results), and methodological tools (definition of social impact-related content, evaluation methods, and indicator content). [Results/Conclusions] International evaluation systems are guided by national strategic needs, incorporate social impact into the entire research lifecycle management process through legislation, and link impact to funding allocation. They operate through policy-driven mechanisms, collaborative efforts among stakeholders, data-driven methodologies, and dynamic feedback loops. The key characteristics of typical international research evaluation models are as follows: 1) Multi-dimensional indicators: moving beyond traditional academic metrics, evaluation frameworks now encompass a wide range of impacts, including the effects of research outcomes on social welfare, industrial development, and employment. 2) Dynamic adjustment: as the socio-economic and technological environment evolves, these social impact evaluation systems also undergo dynamic adjustment and innovation. 3) Multi-stakeholder collaboration: this involves diversified participation, cross-disciplinary and cross-departmental collaboration, and the full involvement of stakeholders throughout the process. Based on these findings, the study offers insights for different stages of social impact assessment. Prior to implementation, additional indicators aligned with domestic strategic priorities, such as environmental sustainability, social equity, and cultural heritage preservation, should be incorporated alongside traditional metrics, and the policy and legal framework should be refined. During implementation, a multi-stakeholder collaborative evaluation platform should be established, and a dynamic system incorporating resilience coefficients should be developed to address uncertainties. After completion, a long-term monitoring and tracking mechanism should be implemented to capture ongoing impacts, with feedback-driven updates to the indicator system.
This approach aims to foster a healthy evaluation ecosystem, accelerate the translation of research outcomes into societal value, and promote the integrated development of scientific research and social progress.

  • WEITianyu, LIUZhongyi, ZHANGNing
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0142
    Accepted: 2025-04-27

    [Purpose/Significance] Against the backdrop of digital government construction, government digital humans have emerged as a new type of service agent in human-machine collaborative governance, and the mechanism through which their social role positioning influences public adoption behavior urgently requires theoretical exploration. Most existing studies have focused on the technical level. Drawing on social role theory, this study explores how different role positionings of government digital humans in government service scenarios influence public adoption behavior, which is of great significance for optimizing government services and raising their level of intelligence. [Method/Process] An experimental method was adopted to construct a two-factor between-subjects design crossing social role and business type, and a simulated government service experiment was carried out with random group assignment. Based on previous studies, we defined the role positionings of "advisor" and "decision-maker" for government digital humans and constructed experimental scenarios by combining two service types: consultation and approval. The subjects were randomly assigned to groups to complete a role-cognition test and human-computer interaction tasks. Data were collected through a combination of scenario simulation and questionnaire survey, and the psychological mechanisms and decision-making logic underlying the public's adoption behavior were analyzed. [Results/Conclusions] The research findings are as follows: 1) There is a significant interaction effect between the social roles and business types of government digital humans; in approval scenarios, the decision-maker role promotes public adoption behavior more strongly than the advisor role. 2) Perceived human-computer trust plays a crucial mediating role in the path from social role to public adoption behavior, revealing the core value of the trust mechanism in human-computer interaction. 3) The synergy between role authority and task fit constitutes an important mechanism shaping public cognition. This study expands the explanatory boundary of social role theory in the field of intelligent government services and provides theoretical support for the construction of smart government services. Certain limitations remain: the simulated service scenarios cannot fully reproduce the complexity of real government services. Future research could extend the multi-dimensional role classification system, deepen the mechanism exploration with mixed methods, verify the applicability of the theoretical model in real government service scenarios, and extend the existing conclusions. In addition, the dynamic impact of long-term interaction between government digital humans and the public on behavioral evolution is a promising research direction.
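    One way the reported interaction and mediation effects could be examined is sketched below with a two-way between-subjects ANOVA and a simple Baron-Kenny style check; the abstract does not specify the authors' analysis pipeline, and the file and variable names are hypothetical.

```python
# Illustrative sketch only: role x business-type interaction via two-way ANOVA,
# then a rough check of trust as a mediator. Names are assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # hypothetical: one row per participant
# Assumed columns: role ("advisor"/"decision_maker"), task ("consultation"/"approval"),
# trust (perceived human-computer trust), adoption (adoption intention score)

model = smf.ols("adoption ~ C(role) * C(task)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(role):C(task) row tests the interaction

# Simple mediation check: does perceived trust carry part of the role effect?
path_a = smf.ols("trust ~ C(role)", data=df).fit()
path_b = smf.ols("adoption ~ trust + C(role)", data=df).fit()
indirect = path_a.params["C(role)[T.decision_maker]"] * path_b.params["trust"]
print("approximate indirect effect via trust:", indirect)
```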

  • WANG Yuanming, WANG Xueli
    Journal of library and information science in agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.25-0065
    Accepted: 2025-04-17

    [Purpose/Significance] The ongoing digital transformation has led to significant changes in public cultural services, particularly in content generation, communication channels, and modes of public participation. "Accessibility," a key indicator of the extent to which citizens' cultural rights are realized, is typically assessed along four dimensions: availability, acceptability, accessibility, and adaptability. Previous research has focused primarily on the supply side, examining how factors such as the distribution of cultural resources, infrastructure development, and policy support affect user engagement. However, with the widespread adoption of digital technologies, individuals' ability and willingness to access information, use services, and provide feedback - collectively referred to as "digital literacy" - has become an increasingly important influence on cultural participation. Consequently, this study explores the relationship between users' digital literacy and the accessibility of public cultural services from a demand-side perspective, aiming to provide a more systematic theoretical framework and practical approach to optimizing the effectiveness of public cultural services. [Method/Process] This study assesses users' digital literacy in terms of digital access, Internet usage, and service availability, based on data collected from the Beijing-Tianjin-Hebei region. A structured questionnaire yielded 892 valid responses. A generalized ordered logit model is then employed to analyze the relationship between users' digital literacy and the various dimensions of service accessibility, including their substitution and overlap effects. [Results/Conclusions] A digital divide currently exists between different demographic groups. A significant substitution effect is observed between traditional public cultural accessibility and users' digital literacy, with limited overlap between the two. Digitization has driven the modernization of public cultural resources and services, particularly in terms of technology and service delivery. However, there remains a time lag between users' digital literacy and the digital transformation of the public cultural supply side. This lag suggests that users' digital needs and the availability of digital cultural services are not fully aligned, which undermines the effectiveness of public cultural services. Therefore, enhancing users' digital literacy, especially their ability to adapt to digital cultural resources, is crucial for transitioning public cultural services from "accessibility" to "enjoyment". In promoting the digital upgrading of public cultural services, greater emphasis should be placed on developing users' capabilities and anticipating their needs.
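    A minimal sketch of an ordinal regression of this kind is shown below, using a standard proportional-odds ordered logit in statsmodels as a stand-in: the generalized ordered logit used in the paper relaxes the parallel-lines assumption (commonly fitted with gologit2 in Stata). File and variable names are assumptions.

```python
# Illustrative sketch only: standard ordered logit as a proxy for the paper's
# generalized ordered logit. File and variable names are assumptions.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey_bth.csv")  # hypothetical: the 892 valid responses
# Assumed columns: accessibility (ordinal score), digital_access, internet_use,
# service_availability, plus controls such as age and education
y = df["accessibility"]
X = df[["digital_access", "internet_use", "service_availability", "age", "education"]]

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())  # positive coefficients: higher digital literacy raises the
                      # odds of reporting a higher accessibility level
```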

  • Yuhong CUI, Jintao ZHAO
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0721
    Accepted: 2025-04-02

    [Purpose/Significance] The development of artificial intelligence generated content (AIGC) technology has opened up new prospects for creating inclusive and expansive learning environments. In light of the potential risks associated with the misuse of AIGC tools, this study analyzes the factors influencing students' use of AIGC tools within the context of artificial intelligence literacy. It constructs a conceptual model framework and explores the relational paths among influencing variables, aiming to provide a theoretical basis for advancing AI literacy education in libraries and other educational institutions. [Method/Process] This study adopts a mixed-method approach that integrates Structural Equation Modeling (SEM) and mediation analysis to explore the relationships between the factors that influence AIGC tool usage. A conceptual relationship model was constructed based on the Technology Acceptance Model (TAM), a widely used model for assessing users' acceptance of new technologies, and extended with AI literacy as a key variable to examine its moderating role in shaping students' use of AIGC tools. The data were collected via a survey of university students who had used AIGC tools. The survey included items measuring constructs such as effort expectancy, performance expectancy, behavioral intention, AI literacy, and actual tool usage. SEM was used to test the proposed hypotheses and validate the relationships between the identified factors, and mediation analysis was employed to assess indirect effects between variables. [Results/Conclusions] The findings indicate that effort expectancy exerts a direct impact on students' actual use of AIGC tools and indirectly promotes usage behavior through performance expectancy and behavioral intention. Furthermore, AI literacy plays a crucial role in improving the conversion rate from intention to actual usage. Specifically, AI literacy significantly enhances students' acceptance of AIGC tools, especially their practical ability to use these tools effectively. The research also identifies key factors that influence students' use of AIGC tools, such as performance expectancy, effort expectancy, and behavioral intention, and highlights the significant moderating effect of AI literacy on the relationships among these factors. This study provides empirical evidence for the effective integration of AIGC technology into the education sector and offers theoretical guidance for libraries and educational organizations on designing AI literacy education programs that help students adapt to a digitally driven society. Future research may examine the utilization of AIGC tools across different academic disciplines, with particular emphasis on specialized domains. Additionally, the proposed model may be refined to accommodate a wider range of educational contexts and learning scenarios.
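    A minimal sketch of the structural part of such a TAM-style model is shown below using semopy, treating each construct as a composite score; the model syntax, variable names, and data file are assumptions rather than the authors' specification.

```python
# Illustrative sketch only: TAM-style structural model with composite scores.
# Construct names, paths, and the data file are assumptions.
import pandas as pd
import semopy

df = pd.read_csv("aigc_survey.csv")  # hypothetical: one row per student
# Assumed columns: EE (effort expectancy), PE (performance expectancy),
# BI (behavioral intention), AIL (AI literacy), USE (actual usage)

model_desc = """
PE ~ EE
BI ~ PE + EE
USE ~ BI + AIL
"""
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())            # path estimates and p-values
print(semopy.calc_stats(model))   # fit indices such as CFI and RMSEA
```

    A moderation test for AI literacy would additionally require a BI x AIL product term or a multi-group comparison; that refinement is omitted from this sketch.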

  • Chaochen WANG, Ayang QI, Xiaoqing XU, Linwei CUI
    Journal of Library and Information Science in Agriculture. https://doi.org/10.13998/j.cnki.issn1002-1248.24-0760
    Accepted: 2025-03-31

    [Purpose/Significance] This study investigates how interactions on online social platforms triggered by IP-based games stimulate users' active learning and reading. The social demands within gaming communities and their derivative reading-sharing interactions constitute dual intrinsic motivations that promote autonomous reading behavior. Exploring new developmental directions for reading promotion through digital game dynamics and group-based social guidance provides broader research perspectives for innovative knowledge acquisition and pedagogical learning paradigms. [Method/Process] Taking the game "Black Myth: WUKONG" as the research background, we collected relevant comments and original posts from four social media platforms. The LDA model was used to classify topics in the collected data, and users who demonstrated marked behavioral tendencies towards book-related engagement and reading activities attributable to the "WUKONG" game experience were manually identified from this dataset. Through user-account backtracking, we examined the details of their social discourse and account behavior, studying the factors that stimulate users' interest in reading and active reading behavior. [Results/Conclusions] Analysis of the five thematic clusters identified by LDA modeling revealed that, among user behaviors centered on the theme of cultural exploration, 38.3% of the behavioral data showed increased exploratory engagement with the original literary work and related content during "WUKONG"-mediated group interactions. Whether this interactive exploration translates into lasting reading habits needs further study. Further analysis showed that a portion of users were influenced in their subsequent behaviors by the game and the associated social interaction. Text mining of user content on key topics revealed that 61.15% of user accounts had no prior engagement history, representing first-time participants in the cultural learning interactions surrounding the "WUKONG" game. Notably, 23.7% of this cohort spontaneously expressed self-directed reading intentions during the game-social scenario. As the dominant subgroup in the dataset, their behavioral patterns suggest that gamified social platforms may serve as critical trigger mechanisms. The factors that stimulate users to read independently include competing for a voice in social interactions and obtaining gaming experience. Accordingly, strategic practices for autonomous reading should be implemented through digital content guides, transmedia narrative interactions, and visual scene experiences. This research investigates the orienting mechanisms of digital games and community interactions in edutainment convergence, demonstrating both theoretical value and practical implications for user behavior analysis and reading promotion. While the study design ensured breadth of data collection, the heterogeneity of social attributes across platforms warrants further investigation. Subsequent studies should conduct platform-specific comparative experiments to strengthen the empirical foundation for behavioral intervention strategies.
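    A minimal sketch of the LDA topic-classification step is shown below with scikit-learn; the toy comments, preprocessing, and vectorizer settings are stand-ins, with five topics chosen only because the abstract reports five thematic clusters.

```python
# Illustrative sketch only: LDA topic classification of platform comments.
# The toy corpus is a stand-in for the data collected from four platforms.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "after playing Wukong I started reading Journey to the West",
    "the boss design references classic chapters of the novel",
    "looking for an annotated edition of the original text",
    "graphics and combat are great, best action game this year",
    "sharing my reading notes on the source material with the guild",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=5, random_state=42).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")  # themes such as cultural exploration surface here
```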