Open Access

ARTICLE

Algorithmic opacity and employees’ knowledge hiding: mediation by job insecurity and moderation by employee-AI collaboration

Chunhong Guo1, Huifang Liu2, Jingfu Guo3,*

1 School of Business Administration, Dongbei University of Finance and Economics, Dalian, 116025, China
2 School of Economics and Management, Shandong Youth University of Political Science, Jinan, 250103, China
3 School of Economics and Management, Dalian Minzu University, Dalian, 116600, China

* Corresponding Author: Jingfu Guo.

Journal of Psychology in Africa 2025, 35(3), 411-418. https://doi.org/10.32604/jpa.2025.065763

Abstract

We explored the effects of algorithmic opacity on employees’ playing dumb and evasive hiding, as distinct from rationalized hiding. We examined the mediating role of job insecurity and the moderating role of employee-AI collaboration. Participants were 421 full-time employees (female = 46.32%, junior employees = 31.83%) who interact with AI at work, drawn from a variety of organizations and industries. Employees provided data on algorithmic opacity, job insecurity, knowledge hiding, employee-AI collaboration, and control variables. The results of structural equation modeling indicated that algorithmic opacity exacerbated employees’ job insecurity, and job insecurity mediated the relationships between algorithmic opacity and both playing dumb and evasive hiding, but not rationalized hiding. The relationship between algorithmic opacity and playing dumb and evasive hiding was more positive when the level of employee-AI collaboration was higher. These findings suggest that employee-AI collaboration reinforces the indirect relationship between algorithmic opacity and playing dumb and evasive hiding. Our study contributes to research on human-AI collaboration by exploring the dark side of employee-AI collaboration.

Keywords

algorithmic opacity; job insecurity; knowledge hiding; employee-AI collaboration

Introduction

Modern artificial intelligence (AI) increasingly incorporates techniques such as machine learning, natural language processing, and computer vision (Hancock et al., 2020), and these systems make decisions autonomously, driven by large amounts of data received, analyzed, and interpreted (Brynjolfsson & Mitchell, 2017). Yet algorithmic opacity can arise: the failure of AI to provide explanations for why a particular decision was made that are understandable to users with little technical knowledge (Glikson & Woolley, 2020). Algorithmic opacity erodes employees’ trust in AI output (Endsley, 2023; Höddinghaus et al., 2021; Shin et al., 2022) and undermines employees’ self-determination and autonomy (Jobin et al., 2019; Kalyanathaya, 2022). The conditions under which algorithmic opacity impairs employees’ basic psychological needs remain a less well-understood AI threat (Vaassen, 2022).

Algorithmic opacity and coping behaviour

Algorithmic opacity reduces employees’ decision-making performance (Yeomans et al., 2019) and affects employees’ job satisfaction and motivation (Wright et al., 2016). However, the coping behaviours that employees adopt in the face of the threat of algorithmic opacity have yet to be clarified (Christin, 2020). Algorithmic opacity undermines employees’ psychological needs for autonomy, competence, and relatedness, which in turn creates job insecurity. Employees may become defensive in the face of job insecurity and engage in knowledge hiding (intentionally withholding or concealing knowledge that colleagues request; Connelly et al., 2012), which serves a defensive function (Soral et al., 2022; Wu, 2020). Thus, we position job insecurity as the mediating mechanism to reveal how algorithmic opacity affects knowledge hiding.

How algorithmic opacity affects job insecurity may vary by work context, with higher risks under employee-AI collaboration (i.e., employees and AI collaborating on tasks together; Kong et al., 2023). We propose that the effect of algorithmic opacity on job insecurity varies with the level of employee-AI collaboration, which calls into question existing literature that may overstate the benefits of employee-AI collaboration.

Theoretical foundations

Self-determination theory (SDT, Deci & Ryan, 2000) proposes that the basic psychological needs of autonomy, competence, and relatedness are critical to human activity. The satisfaction of these needs in work settings promotes psychological growth and healthy functioning, while their frustration is theorized to lead to energy depletion, dysfunction, and illness (Olafsen et al., 2017).

Algorithmic opacity and job insecurity

Based on SDT, we propose that algorithmic opacity can positively affect job insecurity by blocking employees’ autonomy, competence, and relatedness needs. First, algorithmic opacity blocks the satisfaction of autonomy needs: when employees are unable to truly understand how AI performs, they lack a sufficient sense of control over, and trust in, algorithms (Langer & König, 2023; Scherer et al., 2015), which can prevent them from acting on the algorithms’ judgments. Algorithmic opacity therefore undermines user autonomy by hiding significant ways to influence algorithmic outcomes. Notably, scholars have pointed out that algorithmic opacity can pose a threat to autonomy even for sufficiently reliable and fair algorithms (Vaassen, 2022).

Second, algorithmic opacity can impede the fulfilment of competence needs. High AI opacity makes it difficult for employees to understand the decision logic behind AI (Shin, 2020), which can erode trust in AI outputs (Shin, 2020), impair the efficiency of human-AI collaboration (Höddinghaus et al., 2021), and thereby reduce decision effectiveness (Yeomans et al., 2019). When employees are unable to do their jobs effectively, the fulfilment of their competence needs is limited.

Third, if managers manage employees using opaque algorithms, employees will question the fairness of managerial decisions, which can undermine relationships between subordinates and superiors (Cobb & Frey, 1996). Moreover, perceptions of unfairness may increase jealousy, resentment, and competition among employees, which in turn strains relationships among coworkers (Cornelis et al., 2006). In short, the use of opaque algorithms can impede the fulfilment of employees’ relatedness needs.

When employees’ autonomy, competence, and relatedness needs are not met, employees may perceive themselves to be at a more significant disadvantage in the ‘horse race’ with AI, given AI’s superior information processing capabilities (Chamorro-Premuzic & Ahmetoglu, 2016), exceptional and unbiased ability to recognize potential patterns in incomplete data sets (Parry et al., 2016), and lack of fatigue compared to humans (Tong et al., 2021). Therefore, employees may fear losing their jobs and thus experience a strong sense of job insecurity (Lawal et al., 2022; Toros et al., 2022).

The mediating role of job insecurity

Higher job insecurity indicates a greater perceived threat to employees’ jobs and careers from AI. Knowledge hiding is the process by which employees intentionally conceal or withhold knowledge that colleagues request, and it consists of playing dumb, evasive hiding, and rationalized hiding (Connelly et al., 2012). Job-insecure employees may engage in knowledge hiding as a resource-protection behaviour (Agarwal et al., 2022). Playing dumb is a deceptive behaviour in which the hider pretends not to understand what the requester is asking and/or is unwilling to provide help (Yuan et al., 2021). Evasive hiding involves providing incorrect knowledge or a misleading promise of help (Offergelt et al., 2019). With rationalized hiding, however, no deception is intended; the hider instead offers a justification for withholding the knowledge, for example by explaining that they are unable to provide it or that a third party prevents them from doing so (Connelly et al., 2012). Connelly et al. (2012) also point out that different motivations can drive the three types of knowledge hiding. Therefore, it is necessary to consider the differential effects of job insecurity on the three types of knowledge hiding.

Moderating role of employee-AI collaboration

Employee-AI collaboration may amplify the impact of algorithmic opacity on job insecurity. First, when the level of employee-AI collaboration is high, employees’ decisions may rely heavily on AI’s output (Raisch & Krakowski, 2021), and algorithmic opacity can leave employees unable to decipher how AI works. In this case, employees will not only find it difficult to predict the outcome of AI’s work, but they will also be put in the awkward position of having no choice but to accept the AI’s decisions (Anthony et al., 2023), which would disrupt the interaction between the employee and AI (Lebovitz et al., 2022). Employees may fear losing their decision-making power, particularly with unstructured or ‘higher-order’ tasks that call for decisional autonomy.

Second, algorithmic opacity can prevent employees from gaining the insight into the logic behind AI decisions that they need to sustain work productivity. The resulting sense of powerlessness and frustration reduces employees’ job well-being, job satisfaction, and motivation (Wright et al., 2016) and may have adverse spillover effects on inter-employee and employee-supervisor relationships (Judge et al., 2001).

Goal of the study

We propose to test a moderated mediation model in which algorithmic opacity is indirectly related to playing dumb and evasive hiding through job insecurity, and this indirect relationship is moderated by employee-AI collaboration. Our specific hypotheses are as follows:

Hypothesis 1. Higher algorithmic opacity is associated with higher job insecurity.

Hypothesis 2. Job insecurity mediates the relationships between algorithmic opacity and (a) playing dumb and (b) evasive hiding, but not rationalized hiding.

Hypothesis 3. Employee-AI collaboration amplifies the positive effect of algorithmic opacity on job insecurity.

Hypothesis 4. The indirect effects of algorithmic opacity on (a) playing dumb and (b) evasive hiding through job insecurity are moderated by employee-AI collaboration, such that they are stronger when the level of employee-AI collaboration is high rather than low.

Method

Participants and setting

Only employees who interact with AI at work were invited to participate in this study. Participants were 421 employees who interact with AI daily at work. Males accounted for 53.68% of the sample, the average age was 30.48 years (SD = 2.95), the average tenure was 2.54 years (SD = 1.76), and 68.65% of the participants had a bachelor’s degree or higher. The participants’ positions included R&D, operations, sales, and design.

Measures

All variables in this study were measured using well-established scales adapted through a rigorous translation-back-translation procedure. Items were rated on a 5-point Likert scale (1 = completely disagree; 5 = completely agree).

Algorithmic opacity

We drew on Wanner et al.’s (2022) 3-item scale to examine algorithmic opacity, e.g., ‘I don’t understand why AI made the decision it did’ (α = 0.766).

Job insecurity

We used Staufenbiel & König’s (2010) 4-item scale to measure job insecurity, e.g., ‘I don’t think I will be able to keep my job in the future’ (α = 0.819).

Knowledge hiding

Knowledge hiding was measured using Connelly et al.’s (2012) 12-item scale (α = 0.704). Playing dumb was measured with four items, e.g., ‘When other colleagues ask me for knowledge, I pretend I don’t know what they are talking about’ (α = 0.819); evasive hiding with four items, e.g., ‘When other colleagues ask me for knowledge, I agree to help, but I don’t do so’ (α = 0.806); and rationalized hiding with four items, e.g., ‘When other colleagues ask me for knowledge, I explain to them that the information is confidential and only available to people in certain positions’ (α = 0.780).

Employee-AI collaboration

The 5-item scale used by Kong et al. (2023) was adopted, with sample items such as ‘AI is involved in my decision-making process’ (α = 0.858).

Control variables

We controlled for gender (0 = ‘male’, 1 = ‘female’), age (years), job tenure (years), and education (below college, college, bachelor’s degree, master’s degree and above) in our analyses because these demographic characteristics are believed to influence employees’ knowledge hiding significantly (Guo et al., 2022).

Procedure

Shandong Youth University of Political Science provided ethics approval. Participants individually consented to the study. Data were collected via Credamo, a professional data collection platform, on three occasions one month apart. At Time 1, employees were asked to recall scenarios in which AI was applied to accomplish tasks in the workplace and then completed questionnaires on algorithmic opacity and employee-AI collaboration; at Time 2, we collected data on job insecurity following the same procedure; at Time 3, we collected data on knowledge hiding. We matched the three waves of data using the last four digits of employees’ phone numbers to track them across time points. Participants received approximately $0.43 as an incentive.

Data analysis

The proposed hypotheses were tested with structural equation modeling in Mplus 8.3. Specifically, we tested the direct effects between variables by assessing the significance of each path coefficient, and we used 95% confidence intervals (CIs) to assess the mediating effect of job insecurity. To test the moderating effect, we mean-centered the independent and moderator variables and multiplied them to produce the interaction term. We also estimated the simple slopes at lower (−1 SD) and higher (+1 SD) levels of employee-AI collaboration, estimated the two sets of indirect effects at these high and low values, and calculated the significance of the difference between them. In all analyses, we used bootstrapping (n = 5000) to assess the significance of the hypothesized direct, indirect, and moderated pathways and to calculate 95% CIs for effect sizes.
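For illustration, the following is a minimal sketch in Python of a percentile-bootstrap estimation of the conditional indirect effects and the index of moderated mediation. It uses ordinary least squares on observed scale scores rather than the latent-variable model we estimated in Mplus, and the column names (opacity, collab, insecurity, playing_dumb) are hypothetical placeholders rather than the variable names in our data file; the exact CI method implemented in Mplus may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_moderated_mediation(df, n_boot=5000, seed=1):
    rng = np.random.default_rng(seed)
    # Mean-center the predictor and the moderator before forming the interaction term
    df = df.assign(x=df["opacity"] - df["opacity"].mean(),
                   w=df["collab"] - df["collab"].mean())
    sd_w = df["w"].std()
    draws = []
    for _ in range(n_boot):
        s = df.iloc[rng.integers(0, len(df), len(df))]  # resample rows with replacement
        # Stage 1: job insecurity on opacity, collaboration, and their interaction
        X1 = sm.add_constant(pd.DataFrame({"x": s["x"].values,
                                           "w": s["w"].values,
                                           "xw": (s["x"] * s["w"]).values}))
        a = sm.OLS(s["insecurity"].values, X1).fit().params
        # Stage 2: playing dumb on job insecurity, retaining the direct effect of opacity
        X2 = sm.add_constant(pd.DataFrame({"m": s["insecurity"].values,
                                           "x": s["x"].values}))
        b = sm.OLS(s["playing_dumb"].values, X2).fit().params["m"]
        draws.append({"indirect_low": (a["x"] - a["xw"] * sd_w) * b,   # at -1 SD of collaboration
                      "indirect_high": (a["x"] + a["xw"] * sd_w) * b,  # at +1 SD of collaboration
                      "mod_med_index": a["xw"] * b})
    # 95% percentile bootstrap confidence intervals for each quantity
    return pd.DataFrame(draws).quantile([0.025, 0.975])
```

The same two-stage regressions, with evasive hiding or rationalized hiding as the outcome, would yield the remaining indirect effects.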

Validity test

To test discriminant validity, we conducted a confirmatory factor analysis (CFA) in Mplus 8.3. The fit indices of the six-factor model (algorithmic opacity, job insecurity, playing dumb, evasive hiding, rationalized hiding, and employee-AI collaboration) were the best among the competing models (χ2 (237) = 298.970, CFI = 0.983, TLI = 0.980, SRMR = 0.035, RMSEA = 0.025), indicating that the variables used in this study possess good discriminant validity.

Common method bias test

When testing for common method bias using Harman’s single-factor method, five factors had eigenvalues greater than 1, and the first unrotated factor explained 19.794% of the variance, suggesting that common method bias was of limited concern.
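As an illustration of this check, the sketch below shows a principal-component variant of Harman’s single-factor test; it is not necessarily the exact software routine we used, and `items` is assumed to be a data frame containing all questionnaire items.

```python
import numpy as np

def harman_single_factor(items):
    """Unrotated principal-component check on the item correlation matrix."""
    corr = np.corrcoef(items.values, rowvar=False)  # item-by-item correlations
    eigvals = np.linalg.eigvalsh(corr)[::-1]        # eigenvalues, largest first
    n_factors = int((eigvals > 1).sum())            # components with eigenvalue > 1
    first_share = eigvals[0] / eigvals.sum()        # variance explained by the first factor
    return n_factors, first_share
```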

Results

Descriptive statistics

The results of the descriptive statistical analysis are shown in Table 1. Algorithmic opacity was positively associated with job insecurity (r = 0.211, p < 0.01), job insecurity was positively associated with playing dumb (r = 0.476, p < 0.01) and evasive hiding (r = 0.506, p < 0.01), but not with rationalized hiding (r = 0.021, p > 0.05), thus providing initial support for the hypotheses.


Algorithmic opacity and job insecurity

The structural equation model achieved a good fit: χ2 (188) = 252.147, CFI = 0.974, TLI = 0.968, SRMR = 0.035, and RMSEA = 0.028. As shown in Table 2, algorithmic opacity positively affected job insecurity (B = 0.262, p < 0.001). Hence, Hypothesis 1 was supported.


Moderating effects of employee-AI collaboration

The interaction between algorithmic opacity and employee-AI collaboration positively affected job insecurity (B = 0.224, p < 0.001). The results of the simple slopes (see Figure 1) showed that the relationship between algorithmic opacity and job insecurity was significant when the level of employee-AI collaboration was high (B = 0.520, p < 0.001). However, the relationship was no longer significant when the level of employee-AI collaboration was low (B = 0.003, ns). Hence, Hypothesis 3 was supported.
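For readers reconstructing these estimates, the simple slopes follow directly from the first-stage coefficients; the implied standard deviation of employee-AI collaboration (roughly 1.15) is our own back-calculation from the reported values rather than a figure reported in Table 2:

\[
\frac{\partial\, \text{JI}}{\partial\, \text{AO}} = a_1 + a_3 W, \qquad a_1 = 0.262,\; a_3 = 0.224,
\]
\[
a_1 + a_3\, SD_W \approx 0.262 + 0.224 \times 1.15 \approx 0.520, \qquad a_1 - a_3\, SD_W \approx 0.003.
\]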


Figure 1: Moderating effects of employee-AI collaboration

Mediating effects of job insecurity

The mediation model showed that algorithmic opacity was positively related to job insecurity (B = 0.262, p < 0.001), which in turn was positively related to playing dumb (B = 0.460, p < 0.001) and evasive hiding (B = 0.493, p < 0.001), but not to rationalized hiding (B = 0.036, ns). The results of 5000 bootstrap resamplings showed that algorithmic opacity had a significant indirect effect on both playing dumb (ab = 0.121, CI95 = 0.075, 0.173) and evasive hiding (ab = 0.129, CI95 = 0.079, 0.186) through job insecurity. This suggests that job insecurity mediates the effects of algorithmic opacity on playing dumb and evasive hiding. Therefore, Hypothesis 2 was supported.
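As an arithmetic restatement of these estimates, each indirect effect is the product of the first-stage path from algorithmic opacity to job insecurity and the second-stage path from job insecurity to the respective outcome:

\[
ab_{\text{playing dumb}} = 0.262 \times 0.460 \approx 0.121, \qquad ab_{\text{evasive hiding}} = 0.262 \times 0.493 \approx 0.129.
\]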

Moderated mediation effects

The results of the tests for the moderated mediation effects are shown in Table 3. Employee-AI collaboration significantly moderated the indirect effects of algorithmic opacity through job insecurity on playing dumb (index of moderated mediation = 0.105, CI95 = 0.067, 0.144) and evasive hiding (index of moderated mediation = 0.111, CI95 = 0.071, 0.153). The mean indirect effects of algorithmic opacity on playing dumb (ab = 0.236, CI95 = 0.181, 0.303) and evasive hiding (ab = 0.250, CI95 = 0.176, 0.324) were significant when employee-AI collaboration was high (+1 SD), but the mean indirect effects on playing dumb (ab = 0.013, CI95 = −0.059, 0.066) and evasive hiding (ab = 0.014, CI95 = −0.049, 0.072) were not significant when employee-AI collaboration was low (−1 SD). The differences between these two sets of indirect effects for playing dumb (ab difference = 0.223, CI95 = 0.143, 0.305) and evasive hiding (ab difference = 0.236, CI95 = 0.150, 0.324) were statistically significant, thus supporting Hypothesis 4.
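These quantities follow the standard formulas for first-stage moderated mediation; restating them with the coefficients reported above (small discrepancies reflect the bootstrap means):

\[
\text{Index of moderated mediation} = a_3 b: \quad 0.224 \times 0.460 \approx 0.103 \ (\text{reported } 0.105), \qquad 0.224 \times 0.493 \approx 0.110 \ (\text{reported } 0.111),
\]
\[
\text{conditional indirect effect at } W = \pm 1\, SD = (a_1 \pm a_3\, SD_W)\, b.
\]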


Discussion

The findings suggest that algorithmic opacity is positively related to job insecurity. Although existing studies have not yet tested the direct relationship between algorithmic opacity and job insecurity, previous research supports our findings. For example, many scholars have identified algorithmic opacity as a central source of AI threats (Vaassen, 2022), and threats in the workplace can create a strong sense of job insecurity among employees. In addition, algorithmic opacity diminishes feelings of control and autonomy at work (Langer & König, 2023), and work autonomy, in turn, significantly predicts job insecurity. Finally, Langer and König’s (2023) review points out that algorithmic opacity also affects users’ job well-being, job satisfaction, and motivation.

We found that job insecurity significantly predicted employees’ playing dumb and evasive hiding, but job insecurity was not related to rationalized hiding. Existing literature exploring the relationship between job insecurity and knowledge hiding treats knowledge hiding as a one-dimensional construct and generally agrees that job insecurity is positively related to knowledge hiding (Arain et al., 2024; Chhabra & Pandey, 2023). However, prior literature notes that different motivations can drive the three types of knowledge hiding (Connelly et al., 2012). Connelly and Zweig (2015) found that evasive hiding and playing dumb, but not rationalized hiding, were associated with retaliation expectations and intentions. Zhao et al. (2016) found that workplace ostracism predicts evasive hiding and playing dumb but not rationalized hiding. In addition, Guo et al. (2022) reported that territoriality is positively related to evasive hiding and playing dumb but not rationalized hiding, and territoriality creates protective motives in employees (David & Shih, 2024). These findings are consistent with ours.

In addition, our findings support the moderating role of employee-AI collaboration in the relationship between algorithmic opacity and job insecurity: the relationship is more positive when the level of employee-AI collaboration is high rather than low. This finding contrasts with existing research that has focused extensively on the positive impact of employee-AI collaboration. For example, it has been noted that employee-AI collaboration expands the scope of search in problem-solving (Raisch & Fomina, 2023), improves employee performance (Guo et al., 2024) and creativity (Jia et al., 2024; Raisch & Krakowski, 2021), and enhances employees’ career sustainability (Kong et al., 2023). However, Tang et al.’s (2023) study also points out that collaborating with AI can have a dark side, emphasizing that reliance on AI can threaten employees’ self-esteem. Our study extends this work by showing that employee-AI collaboration reinforces the impact of algorithmic opacity on job insecurity. In doing so, we also alert future scholars to the possible dark side of employee-AI collaboration.

Managerial implications

First, our findings suggest that algorithmic opacity increases employees’ job insecurity and thus prompts self-protection through knowledge-hiding behaviours. Managers are therefore encouraged to use (at least partially) transparent algorithmic models and features when implementing algorithm-based prediction and classification solutions (Guidotti et al., 2019). When transparency is challenging to implement, post hoc explainability or interpretability methods can help employees understand the system’s functionality or the rationale behind a particular prediction or classification (Ribeiro et al., 2016). Additionally, managers can provide regular training to employees on the fundamentals of AI and machine learning to improve the team’s overall trust in algorithmic decision-making and its ability to monitor it effectively.

Second, to reduce playing dumb and evasive hiding in organizations, managers should take steps to alleviate employees’ job insecurity arising from the threat of AI. Through interactive workshops, managers can communicate the true intent of AI deployment, namely that the organization deploys AI to support and assist employees so that they can focus on higher-value tasks such as strategic or creative work, thereby dispelling misconceptions and encouraging employees to embrace the opportunities the technology presents. In addition, managers can hold regular career development conversations with employees to clearly define their roles, responsibilities, and promotion paths, ensuring that employees understand their personal growth opportunities and the company’s long-term commitment to them.

Limitations and future directions

This study’s first limitation is the possibility of common method bias and reverse causation. Although data from different organizations enhance the generalizability of the findings and the multi-wave design reduces the concern of common method bias, such concerns remain justified because all data came from employee self-reports. Additionally, we encourage future studies to use experimental designs to test causality.

The generalizability of the sample may also be limited because the data were collected in China, where culture tends to be more collectivistic than in Western cultures. As Issac and Baral (2020) pointed out, there are significant differences in the drivers of knowledge hiding across cultures; it is therefore reasonable to ask whether the protection motivation triggered by job insecurity would also drive knowledge hiding in Western cultural contexts. Future research could test our theoretical model in different cultural contexts to assess the generalizability of the findings and explore other employee coping behaviours and the ‘dark side’ of employee-AI collaboration.

Conclusion

While scholars have generally viewed algorithmic opacity as a central AI threat, employees’ behaviours in response to this threat have received little attention. Our findings indicate that employees experience job insecurity due to algorithmic opacity; when such fears arise, employees protect themselves by playing dumb and engaging in evasive hiding. This effect is heightened in work settings with high employee-AI collaboration, which amplifies the indirect relationship. Our research encourages managers to use (at least partially) transparent algorithmic models and, at the same time, to implement strategies that mitigate employees’ job insecurity and, in turn, reduce knowledge-hiding behaviours in the workplace.

Acknowledgement: We gratefully acknowledge funding from the Social Science Foundation of Liaoning Province.

Funding Statement: This work was supported by the Social Science Foundation of Liaoning Province (L23BJY022).

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Chunhong Guo; data collection: Chunhong Guo, Huifang Liu; analysis and interpretation of results: Chunhong Guo, Huifang Liu; draft manuscript preparation: Chunhong Guo, Huifang Liu, Jingfu Guo. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The corresponding author can provide the supporting data for this study upon reasonable request.

Ethics Approval: The studies involving human participants were conducted in accordance with the ethical standards of the institutional research committee and received approval from Shandong Youth University of Political Science (Approval No. 202409), in line with the 1964 Helsinki Declaration, its later amendments, and equivalent ethical guidelines.

Informed Consent: All individual participants included in this study provided informed consent prior to their involvement.

Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.

References

Agarwal, U. A., Avey, J., & Wu, K. (2022). How and when abusive supervision influences knowledge hiding behavior: Evidence from India. Journal of Knowledge Management, 26(1), 209–231. https://doi.org/10.1108/JKM-10-2020-0789 [Google Scholar] [CrossRef]

Anthony, C., Bechky, B. A., & Fayard, A. L. (2023). Collaborating with AI: Taking a system view to explore the future of work. Organization Science, 34(5), 1672–1694. https://doi.org/10.1287/orsc.2022.1651. [Google Scholar] [CrossRef]

Arain, G. A., Bhatti, Z. A., Hameed, I., Khan, A. K., & Rudolph, C. W. (2024). A meta-analysis of the nomological network of knowledge hiding in organizations. Personnel Psychology, 77(2), 651–682. https://doi.org/10.1111/peps.12562 [Google Scholar] [CrossRef]

Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530–1534. https://doi.org/10.1126/science.aap8062. [Google Scholar] [PubMed] [CrossRef]

Chamorro-Premuzic, T., & Ahmetoglu, G. (2016). The pros and cons of robot managers. Harvard Business Review, 12, 2–5. [Google Scholar]

Chhabra, B., & Pandey, P. (2023). Job insecurity as a barrier to thriving during COVID-19 pandemic: A moderated mediation model of knowledge hiding and benevolent leadership. Journal of Knowledge Management, 27(3), 632–654. https://doi.org/10.1108/JKM-05-2021-0403 [Google Scholar] [CrossRef]

Christin, A. (2020). The ethnographer and the algorithm: Beyond the black box. Theory and Society, 49(5), 897–918. https://doi.org/10.1007/s11186-020-09411-3 [Google Scholar] [CrossRef]

Cobb, A. T., & Frey, F. M. (1996). The effects of leader fairness and pay outcomes on superior/subordinate relations. Journal of Applied Social Psychology, 26(16), 1401–1426. https://doi.org/10.1111/j.1559-1816.1996.tb00078.x [Google Scholar] [CrossRef]

Connelly, C. E., & Zweig, D. (2015). How perpetrators and targets construe knowledge hiding in organizations. European Journal of Work & Organizational Psychology, 24(3), 479–489. https://doi.org/10.1080/1359432X.2014.931325 [Google Scholar] [CrossRef]

Connelly, C. E., Zweig, D., Webster, J., & Trougakos, J. P. (2012). Knowledge hiding in organizations. Journal of Organizational Behavior, 33(1), 64–88. https://doi.org/10.1002/job.737 [Google Scholar] [CrossRef]

Cornelis, I., Van Hiel, A., & De Cremer, D. (2006). Effects of procedural fairness and leader support on interpersonal relationships among group members. Group Dynamics: Theory, Research, and Practice, 10(4), 309–328. https://doi.org/10.1037/1089-2699.10.4.309 [Google Scholar] [CrossRef]

David, T., & Shih, H. A. (2024). Evolutionary motives in employees’ knowledge behavior when being envied at work. Journal of Knowledge Management, 28(3), 855–873. https://doi.org/10.1108/JKM-12-2022-1004 [Google Scholar] [CrossRef]

Deci, E. L., & Ryan, R. M. (2000). The what and why of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. https://doi.org/10.1207/S15327965PLI1104_01 [Google Scholar] [CrossRef]

Endsley, M. R. (2023). Supporting human-AI teams: Transparency, explainability, and situation awareness. Computers in Human Behavior, 140(4), 107574. https://doi.org/10.1016/j.chb.2022.107574 [Google Scholar] [CrossRef]

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057 [Google Scholar] [CrossRef]

Guidotti, R., Monreale, A., Giannotti, F., Pedreschi, D., Ruggieri, S. et al. (2019). Factual and counterfactual explanations for black box decision making. IEEE Intelligent Systems, 34(6), 14–23. https://doi.org/10.1109/MIS.2019.2957223 [Google Scholar] [CrossRef]

Guo, M., Brown, G., & Zhang, L. (2022). My knowledge: The negative impact of territorial feelings on employee’s own innovation through knowledge hiding. Journal of Organizational Behavior, 43(5), 801–817. https://doi.org/10.1002/job.2599 [Google Scholar] [CrossRef]

Guo, M., Gu, M., & Huo, B. (2024). The impacts of automation and augmentation AI use on physicians’ performance: An ambidextrous perspective. International Journal of Operations & Production Management. https://doi.org/10.1108/IJOPM-06-2023-0509 [Google Scholar] [CrossRef]

Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022 [Google Scholar] [CrossRef]

Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116(4), 106635. https://doi.org/10.1016/j.chb.2020.106635 [Google Scholar] [CrossRef]

Issac, A. C., & Baral, R. (2020). Knowledge hiding in two contrasting cultural contexts: A relational analysis of the antecedents using TISM and MICMAC. VINE Journal of Information and Knowledge Management Systems, 50(3), 455–475. https://doi.org/10.1108/VJIKMS-09-2019-0148 [Google Scholar] [CrossRef]

Jia, N., Luo, X., Fang, Z., & Liao, C. (2024). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5–32. https://doi.org/10.5465/amj.2022.0426 [Google Scholar] [CrossRef]

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2 [Google Scholar] [CrossRef]

Judge, T. A., Thoresen, C. J., Bono, J. E., & Patton, G. K. (2001). The job satisfaction-job performance relationship: A qualitative and quantitative review. Psychological Bulletin, 127(3), 376. https://doi.org/10.1037/0033-2909.127.3.376. [Google Scholar] [PubMed] [CrossRef]

Kalyanathaya, K. P. (2022). A literature review and research agenda on explainable artificial intelligence (XAI). International Journal of Applied Engineering and Management Letters (IJAEML), 6(1), 43–59. https://doi.org/10.47992/ijaeml.2581.7000.0119 [Google Scholar] [CrossRef]

Kong, H., Yin, Z., Baruch, Y., & Yuan, Y. (2023). The impact of trust in AI on career sustainability: The role of employee-AI collaboration and protean career orientation. Journal of Vocational Behavior, 146(8), 103928. https://doi.org/10.1016/j.jvb.2023.103928 [Google Scholar] [CrossRef]

Langer, M., & König, C. J. (2023). Introducing a multi-stakeholder perspective on opacity, transparency and strategies to reduce opacity in algorithm-based human resource management. Human Resource Management Review, 33(1), 100881. https://doi.org/10.1016/j.hrmr.2021.100881 [Google Scholar] [CrossRef]

Lawal, A. M., Idemudia, E. S., Karing, C., & Bello, B. M. (2022). COVID-19 context and job insecurity among casual employees: The predictive value of education, financial stress, and coping ability. Journal of Psychology in Africa, 32(5), 440–446. https://doi.org/10.1080/14330237.2022.2121053 [Google Scholar] [CrossRef]

Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33(1), 126–148. https://doi.org/10.1287/orsc.2021.1549. [Google Scholar] [CrossRef]

Offergelt, F., Spörrle, M., Moser, K., & Shaw, J. D. (2019). Leader-signaled knowledge hiding: Effects on employees’ job attitudes and empowerment. Journal of Organizational Behavior, 40(7), 819–833. https://doi.org/10.1002/job.2343 [Google Scholar] [CrossRef]

Olafsen, A. H., Niemiec, C. P., Halvari, H., Deci, E. L., & Williams, G. C. (2017). On the dark side of work: A longitudinal analysis using self-determination theory. European Journal of Work & Organizational Psychology, 26(2), 275–285. https://doi.org/10.1080/1359432X.2016.1257611 [Google Scholar] [CrossRef]

Parry, K., Cohen, M., & Bhattacharya, S. (2016). Rise of the machines: A critical consideration of automated leadership decision making in organizations. Group & Organization Management, 41(5), 571–594. https://doi.org/10.1177/1059601116643442 [Google Scholar] [CrossRef]

Raisch, S., & Fomina, K. (2023). Combining human and artificial intelligence: Hybrid problem-solving in organizations. Academy of Management Review, 50(2), 441–464. https://doi.org/10.5465/amr.2021.0421 [Google Scholar] [CrossRef]

Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072 [Google Scholar] [CrossRef]

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). San Francisco, CA, USA. [Google Scholar]

Scherer, L. D., de Vries, M., Zikmund-Fisher, B. J., Witteman, H. O., & Fagerlin, A. (2015). Trust in deliberation: The consequences of deliberative decision strategies for medical decisions. Health Psychology, 34(11), 1090–1099. https://doi.org/10.1037/hea0000203 [Google Scholar] [PubMed] [CrossRef]

Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541–565. https://doi.org/10.1080/08838151.2020.1843357 [Google Scholar] [CrossRef]

Shin, H., Nicolau, J. L., Kang, J., Sharma, A., & Lee, H. (2022). Travel decision determinants during and after COVID-19: The role of tourist trust, travel constraints, and attitudinal factors. Tourism Management, 88(3), 104428. https://doi.org/10.1016/j.tourman.2021.104428. [Google Scholar] [PubMed] [CrossRef]

Soral, P., Pati, S. P., & Kakani, R. K. (2022). Knowledge hiding as a coping response to the supervisors’ dark triad of personality: A protection motivation theory perspective. Journal of Business Research, 142(3), 1077–1091. https://doi.org/10.1016/j.jbusres.2021.12.075 [Google Scholar] [CrossRef]

Staufenbiel, T., & König, C. J. (2010). A model for the effects of job insecurity on performance, turnover intention, and absenteeism. Journal of Occupational and Organizational Psychology, 83(1), 101–117. https://doi.org/10.1348/096317908X401912 [Google Scholar] [CrossRef]

Tang, P. M., Koopman, J., Mai, K. M., De Cremer, D., Zhang, J. H. et al. (2023). No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. Journal of Applied Psychology, 108(11), 1766–1789. https://doi.org/10.1037/apl0001103. [Google Scholar] [PubMed] [CrossRef]

Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9), 1600–1631. https://doi.org/10.1002/smj.3322 [Google Scholar] [CrossRef]

Toros, E., Maslakçı, A., & Sürücü, L. (2022). Fear of COVID-19 and job insecurity among hospitality industry employees: The mediating role of happiness. Journal of Psychology in Africa, 32(5), 431–435. https://doi.org/10.1080/14330237.2022.2121054 [Google Scholar] [CrossRef]

Vaassen, B. (2022). AI, opacity, and personal autonomy. Philosophy & Technology, 35(4), 88. https://doi.org/10.1007/s13347-022-00577-5 [Google Scholar] [CrossRef]

Wanner, J., Herm, L. V., Heinrich, K., & Janiesch, C. (2022). The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electronic Markets, 32(4), 2079–2102. https://doi.org/10.1007/s12525-022-00593-5 [Google Scholar] [CrossRef]

Wright, A., Hickman, T. T. T., McEvoy, D., Aaron, S., Ai, A. et al. (2016). Analysis of clinical decision support system malfunctions: A case series and survey. Journal of The American Medical Informatics Association, 23(6), 1068–1076. https://doi.org/10.1093/jamia/ocw005. [Google Scholar] [PubMed] [CrossRef]

Wu, D. (2020). Empirical study of knowledge withholding in cyberspace: Integrating protection motivation theory and theory of reasoned behavior. Computers in Human Behavior, 105(2), 106229. https://doi.org/10.1016/j.chb.2019.106229 [Google Scholar] [CrossRef]

Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioural Decision Making, 32(4), 403–414. https://doi.org/10.1002/bdm.2118 [Google Scholar] [CrossRef]

Yuan, Y., Yang, L., Cheng, X., & Wei, J. (2021). What is bullying hiding? Exploring antecedents and potential dimension of knowledge hiding. Journal of Knowledge Management, 25(5), 1146–1169. https://doi.org/10.1108/JKM-04-2020-0256 [Google Scholar] [CrossRef]

Zhao, H., Xia, Q., He, P., Sheard, G., & Wan, P. (2016). Workplace ostracism and knowledge hiding in service organizations. International Journal of Hospitality Management, 59(5), 84–94. https://doi.org/10.1016/j.ijhm.2016.09.009 [Google Scholar] [CrossRef]




Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.