Accepted: 2025-12-31
[Purpose/Significance] Generative Artificial Intelligence (GAI) has rapidly reshaped the landscape of social information dissemination, bringing unprecedented network public opinion risks, such as large-scale disinformation spread, algorithmic bias that deepens social inequality, extreme emotional polarization, and model hallucinations that cause cognitive deviations, which significantly amplify the complexity, suddenness, and cross-domain spillover effects of public opinion evolution. These risks not only undermine the authenticity and order of information ecosystems but also pose severe challenges to social governance, public trust, and policy-making efficiency, making their accurate identification, quantitative assessment, and early warning an urgent academic and practical task. Existing research has clear limitations: single-dimensional assessment frameworks fail to capture GAI's multi-faceted, interrelated risks, such as the concealment of generated content, amplification by algorithmic recommendation, and cross-platform diffusion, while traditional models such as the basic BP neural network are prone to local optima and poor generalization and adapt poorly to the non-linear, dynamic, high-dimensional characteristics of GAI-generated content. To address these gaps, this study constructed a four-dimensional risk assessment index system (content, dissemination, sentiment, and user) and proposed a GA-optimized BP neural network model, enriching public opinion management theory in the AI era, providing a practical and efficient tool for precise risk control, and contributing to the construction of a safe, orderly, and trustworthy online space.

[Method/Process] A mixed-method approach grounded in information communication theory and intelligent optimization algorithms, and supported by empirical data, was adopted. Ten typical GAI-induced public opinion events were selected from Sina Weibo (selection criteria: views ≥1 million, original posts ≥60, covering the technology, society, public affairs, and consumption fields). Following the four-stage evolution model (formation, outbreak, mitigation, and recovery) and the four early-warning levels specified in national emergency management standards (Levels I-IV, encoded as the binary outputs 1000, 0100, 0010, and 0001), samples were systematically assigned to evolutionary stages and corresponding risk grades. A 12-indicator system was constructed, covering the content (authenticity, misleadingness, and professionalism), dissemination (speed, scope, and diffusion path), sentiment (intensity, degree of polarization, and negative sentiment ratio), and user (user influence, participant activity, and interaction stickiness) dimensions. Indicator weights were determined objectively, and the data were preprocessed with min-max normalization to eliminate dimensional differences. A four-layer BP neural network (12 input neurons, two hidden layers with 15 and 10 neurons respectively, and 4 output neurons) was built, with its initial weights, thresholds, and hyperparameters (learning rate and number of iterations) optimized by a genetic algorithm (GA). A traditional BP model served as the control, 70% of the data were used for training and 30% for testing, and model performance was evaluated by prediction accuracy.
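To make the preprocessing and control-model setup concrete, the sketch below shows min-max normalization of a 12-indicator sample matrix, a 70/30 train-test split, and a plain BP (multilayer perceptron) baseline with the 15- and 10-neuron hidden layers described above. It is a minimal illustration only: the synthetic data, the scikit-learn implementation, the logistic activation, and the learning rate are assumptions for demonstration, not the paper's actual code or configuration.

```python
# Minimal sketch of data preparation and the traditional BP control model.
# X and y are synthetic placeholders; the real samples are annotated Sina Weibo events.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((120, 12))          # 12 indicators per sample (content/dissemination/sentiment/user)
y = rng.integers(0, 4, size=120)   # warning levels I-IV encoded as classes 0-3

# Min-max normalization eliminates dimensional differences between indicators.
X_norm = MinMaxScaler().fit_transform(X)

# 70% training / 30% test split, as in the experimental design.
X_tr, X_te, y_tr, y_te = train_test_split(X_norm, y, test_size=0.3, random_state=0)

# Traditional BP control model with the 12-15-10-4 topology
# (logistic activation and learning rate are assumed here, not reported values).
bp = MLPClassifier(hidden_layer_sizes=(15, 10), activation="logistic",
                   learning_rate_init=0.01, max_iter=2000, random_state=0)
bp.fit(X_tr, y_tr)
print("BP test accuracy:", accuracy_score(y_te, bp.predict(X_te)))
```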
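The GA stage can likewise be sketched as an evolutionary search over a flattened vector of the network's initial weights and thresholds, with fitness taken as the inverse of the training error; the best chromosome would then seed ordinary back-propagation fine-tuning (omitted here). The population size, crossover and mutation rates, roulette-wheel selection, and fitness definition below are illustrative assumptions rather than the paper's reported configuration.

```python
# Sketch of GA-based initialization of the 12-15-10-4 BP network (assumed operators/settings).
import numpy as np

LAYERS = [12, 15, 10, 4]  # input, two hidden layers, one-hot output for Levels I-IV

def unpack(theta):
    """Decode a flat chromosome into weight matrices and threshold (bias) vectors."""
    params, i = [], 0
    for n_in, n_out in zip(LAYERS[:-1], LAYERS[1:]):
        W = theta[i:i + n_in * n_out].reshape(n_in, n_out); i += n_in * n_out
        b = theta[i:i + n_out]; i += n_out
        params.append((W, b))
    return params

def forward(theta, X):
    """Sigmoid feed-forward pass through the network encoded by theta."""
    a = X
    for W, b in unpack(theta):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))
    return a

def fitness(theta, X, Y):
    """GA fitness: inverse of the mean squared error on the training set."""
    return 1.0 / (np.mean((forward(theta, X) - Y) ** 2) + 1e-6)

def ga_init_weights(X, Y, pop=40, gens=100, cx=0.8, mut=0.1, seed=0):
    """Evolve initial weights/thresholds; the best chromosome seeds BP training."""
    rng = np.random.default_rng(seed)
    dim = sum(n_in * n_out + n_out for n_in, n_out in zip(LAYERS[:-1], LAYERS[1:]))
    popu = rng.uniform(-1, 1, size=(pop, dim))
    for _ in range(gens):
        fit = np.array([fitness(ind, X, Y) for ind in popu])
        probs = fit / fit.sum()                              # roulette-wheel selection
        parents = popu[rng.choice(pop, size=pop, p=probs)]
        children = parents.copy()
        for j in range(0, pop - 1, 2):                       # single-point crossover
            if rng.random() < cx:
                point = rng.integers(1, dim)
                children[j, point:], children[j + 1, point:] = (
                    parents[j + 1, point:].copy(), parents[j, point:].copy())
        mask = rng.random(children.shape) < mut              # Gaussian mutation
        children[mask] += rng.normal(0, 0.1, size=mask.sum())
        children[0] = popu[np.argmax(fit)]                   # elitism
        popu = children
    fit = np.array([fitness(ind, X, Y) for ind in popu])
    return popu[np.argmax(fit)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_demo = rng.random((100, 12))                  # normalized 12-indicator samples (synthetic)
    Y_demo = np.eye(4)[rng.integers(0, 4, 100)]     # one-hot warning levels I-IV
    theta_best = ga_init_weights(X_demo, Y_demo, gens=30)
    print("best GA fitness:", fitness(theta_best, X_demo, Y_demo))
```

Encoding all weights and thresholds in a single chromosome lets the GA search the error surface globally before back-propagation performs local gradient refinement, which is the usual rationale for GA-BP hybrids.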
[Results/Conclusions] Experimental results confirm the clear superiority of the GA-BP model: its prediction accuracy reached 91.67%, 8.34 percentage points higher than that of the traditional BP model (83.33%). This verifies that GA optimization effectively improved model performance, enabling the model to better capture the complex non-linear relationships among GAI-induced risk factors. The multi-dimensional index system successfully extracted the core risk characteristics, enabling comprehensive identification and tracing of GAI-related public opinion risks. Limitations of this study include the concentration of samples on Chinese social platforms, the limited number of cases, and the narrow time span. Future research will expand to cross-border, multi-language samples (e.g., Twitter, Facebook), enrich the technical indicators (e.g., identifiability of GAI content, intensity of algorithmic intervention), and explore integration with deep learning models (e.g., LSTM, Transformer) to further improve the generalizability, real-time performance, and intelligent decision support capability of the risk assessment system.