Abstract
Artificial intelligence (AI) has moved from promise to practice, and it is increasingly regarded as a force that could transform conflict prediction and prevention into a genuinely global endeavor. By drawing on massive data sources, machine learning, and pattern-recognition algorithms, AI systems can detect early signs of conflict and warn decision-makers before violence erupts. At the same time, heavy reliance on data can amplify bias or produce incorrect predictions, undermining trust in the results AI promises to provide. Integrating AI into peacekeeping frameworks therefore raises significant ethical, political, and technical challenges that must be navigated carefully if AI is to be used responsibly. This paper examines those dimensions and considers how AI might be deployed in conflict management in the future.
Key Words
Artificial Intelligence, Conflict Prevention, International Security, Early Warning Systems, Peacekeeping, Machine Learning, Predictive Analytics
Introduction
Understanding and eliminating the causes of instability is one of the defining challenges of our increasingly tense world. Over the past years, despite repeated efforts to settle disputes, interstate conflicts, coups, civil wars, and regional rivalries have continued to uproot societies and devastate nations (Yawar, Sharify, and Sadat 2025). The political, economic, and sometimes social causes of these wars produce widespread suffering, death, and economic devastation. Innovative approaches are therefore needed to anticipate emerging conflicts and prevent them from turning into open violence. Artificial Intelligence (AI) has naturally been proposed for this challenge, offering the means to gather enormous amounts of data, process and interpret it, and even predict developments in ways that were not previously possible.
AI is emerging as a capability that can dramatically enhance conflict prediction and prevention strategies across the globe. Prediction involves forecasting when conflicts, such as civil wars, are likely to escalate, so that intervention can occur as early as possible to cool the escalation (Horowitz et al. 2022). Prevention, by contrast, is about taking proactive measures to defuse tensions before they become actual confrontations. In both cases, the promise of AI is to apply the same pattern-discovery capabilities it has already demonstrated on complex speech and text to large amounts of diverse data, such as economic performance, social indicators, political stability, military activities, and even social media content, in order to surface potential threats to peace.
AI's potential to predict and prevent conflict can be realized in many ways. Machine learning algorithms, advanced predictive analytics, natural language processing, and data mining can all be applied to historical and real-time data. These technologies enable systems to notice trends and correlations that human analysts would miss for reasons of time, resources, or cognitive bias. For instance, AI can collate data from many sources, including news reports, diplomatic communications, and social media, to discover indications of social unrest or incipient political instability and issue early warnings of potential conflict.
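As a toy illustration of the multi-source text scanning described above, the sketch below counts unrest-related terms across hypothetical feeds. The source names and keyword list are invented for illustration and are not drawn from any deployed system; real pipelines would use trained language models rather than keyword matching.

```python
# Minimal sketch (assumed, not from the paper): flagging unrest-related
# language across multiple text sources. Sources and terms are hypothetical.
from collections import Counter

UNREST_TERMS = {"protest", "strike", "riot", "curfew", "clashes", "mobilization"}

def unrest_signal(documents):
    """Count unrest-related terms per source."""
    counts = Counter()
    for source, text in documents:
        tokens = text.lower().split()
        counts[source] += sum(1 for t in tokens if t.strip(".,;:!?") in UNREST_TERMS)
    return counts

docs = [
    ("news_wire", "General strike announced; protest marches and clashes reported."),
    ("social_feed", "Concert tickets on sale this weekend."),
]
signal = unrest_signal(docs)
print(signal)  # the news wire carries more unrest terms than the social feed
```

In practice the raw counts would feed into a statistical model rather than a threshold, but the pattern of fusing heterogeneous text streams into comparable indicators is the same.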
AI's speed and efficiency are among the most attractive aspects of this technology for conflict prevention. Human judgment and intuition suffer from biases and are limited by the amount of information a person can absorb. Even the most gifted human analysts cannot process and analyze datasets as large, in timeframes as short, as the most powerful AI systems (Pauwels 2020). These systems can quickly detect correlations between variables, such as economic decline, political shifts, and social unrest, and assess how likely they are to produce violence. This makes it possible to intervene in a timely fashion at the diplomatic level, deliver humanitarian assistance, and deploy peacekeepers to contain escalating conflicts and limit their effects on the affected populations.
The use of AI in international peace and security also raises challenges, the most important of which is trust. An AI system is only as accurate as the data it is trained on; if that data is incomplete, biased, or inaccurate, the predictions drawn from it can be flawed. Suppose, for example, that the training data is heavily slanted toward one particular view or omits important events; the AI's predictions may then aggravate existing tensions or mislead intervention planning. The quality of the input data is thus a fundamental risk for AI-based conflict prediction and prevention.
A second problem is the transparency of AI decision-making. Transparency is essential in conflict zones, high-stakes environments where people's lives are at risk. The 'black box' problem, the lack of accountability when an AI system's decision-making process cannot be explained, or when the explanation is incomprehensible to a human, poses a considerable risk wherever conflict is involved. Decisions based on AI predictions in diplomatic or political matters, for instance those leading to military interventions or peacekeeping missions, need to be transparent and defensible to the international community and to the parties involved in the conflict (Hofmann 2025).
The political and ethical consequences of AI in military arenas must also be considered. The same AI technologies that can support peacekeeping can be used to manipulate foreign relations, conduct surveillance, or deceive the public through targeted misinformation campaigns. This raises the broader question of whether AI will be used as a weapon in warfare or diplomacy. If AI can be exploited for strategic advantage in the cyber sphere, whether by performing hostile actions or by automating military systems, it will create new threats to global security. Among the many questions surrounding AI, the following ethical ones are especially pertinent: how to protect human rights, how to prevent the misuse of new technologies, and how to ensure that any intervention relying on them is proportional and fair.
This paper examines how AI tools can be used to predict future conflicts and to prevent them. In particular, it discusses how AI can enhance early warning systems, strengthen diplomatic responses to conflict, and support peacekeeping operations. It then examines the principal risks: data bias, accountability, transparency, and the misuse of AI. As the technology develops, understanding its impact on international peace and security becomes ever more important.
AI's potential in conflict prevention can be realized only if it is appropriately planned, regulated, and supervised. AI's predictive power can be very beneficial for anticipating and preventing conflicts, but its effect on international peace and security depends on how these technologies are developed and used (Goldfarb and Lindsay 2022). Like any other promise of the AI future, AI for conflict avoidance must be embedded in ethically and politically accountable parameters, with transparency and accountability, so that its power serves peace rather than igniting conflicts. This paper contributes to the discussion of how AI can be applied safely to conflict prevention in a way that minimizes the risks and complications of its use.
Literature Review
In recent years, the use of AI to predict and prevent conflict has attracted enormous interest, and the literature on its possibilities and constraints has grown steeply. Scholars highlight the role of data-driven decision-making in peacebuilding and conflict resolution, arguing that some early warning signals are easily overlooked by human observers and that AI can help surface that information. One strand of research applies machine learning techniques to historical conflict data in order to predict future instances of violence.
Studies have shown that machine learning algorithms can perform well at forecasting several types of conflict. Such algorithms can learn correlations in historical data, such as the rise and fall of political regimes or economic recessions, that may hint at the likelihood of violence (Horowitz and Scharre 2021). Several researchers have argued that AI's ability to work with large datasets allows for faster and more precise predictions, making it possible to interrupt conflict incubation earlier and save lives.
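The kind of model these studies describe can be sketched, in a deliberately simplified form, as a logistic regression over invented country-level indicators. The features, data, and training setup below are assumptions for illustration only; operational systems use far richer inputs and more sophisticated models.

```python
# Hypothetical sketch: logistic regression fitted by plain gradient descent
# on synthetic country-year indicators (GDP growth, protest frequency)
# to estimate conflict-onset risk. All numbers are invented.
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights; the last weight is the bias term."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))      # predicted onset probability
            err = p - yi                         # gradient of log-loss w.r.t. z
            for j in range(len(xi)):
                w[j] -= lr * err * xi[j]
            w[-1] -= lr * err
    return w

def predict_risk(w, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training data: [GDP growth (%), protests per month] -> onset (1/0)
X = [[-3.0, 9], [-1.5, 7], [2.5, 1], [3.0, 0], [-2.0, 8], [2.0, 2]]
y = [1, 1, 0, 0, 1, 0]
w = train_logistic(X, y)
print(predict_risk(w, [-2.5, 8]))  # shrinking economy + unrest: high risk
print(predict_risk(w, [2.8, 1]))   # growing economy + calm: low risk
```

The point is only the workflow the literature describes: historical indicators in, a calibrated risk estimate out, which analysts can then rank and monitor.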
The literature has also addressed AI-driven early warning systems (EWS), stressing that distorted data undermines the accuracy, impartiality, and actionability of these systems. Scholars have warned that the validity of a prediction depends on the data that powers it: poor-quality data can produce false positives or false negatives, either of which may aggravate a conflict.
Beyond early warning, AI is also useful as an aid to better diplomatic responses to conflict. Predictive analytics can help policymakers identify the most promising strategies for preventing violence or de-escalating tensions. Because the types of conflict that may occur are hard to anticipate, AI can support the choice among courses of action (mediation, sanctions, or military intervention) by simulating scenarios with the available data (Yankoski et al. 2021). While AI is now entering diplomatic practice, there are reasons to be skeptical about transparency and accountability in AI-informed decision-making. As some scholars warn, introducing AI into diplomacy risks stripping out empathy and misunderstanding what life in the current human world is like.
Using AI in conflict prevention also carries challenges and risks. Bias in machine learning algorithms is a prominent one, since algorithms inherit the biases of the data on which they are trained. Predictions may then be systematically slanted against one group or region, with unintended consequences of exclusion and harm. Scholars have also raised ethical concerns: AI can be weaponized by bad actors, for instance for online surveillance or cyber warfare, adding a new dimension to international security.
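One common way to surface the bias described above is to audit a model's error rates across groups or regions. The sketch below compares false-positive rates for two hypothetical regions; the predictions and labels are invented, and only the audit pattern is the point.

```python
# Hedged illustration of a bias audit: comparing the false-positive rate
# of a (hypothetical) conflict classifier across two regions.
def false_positive_rate(preds, labels):
    """Share of truly peaceful cases (label 0) wrongly flagged as conflict (pred 1)."""
    fp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 0)
    negatives = sum(1 for t in labels if t == 0)
    return fp / negatives if negatives else 0.0

# Invented audit data: (model predictions, true outcomes) per region.
region_a = ([1, 1, 0, 1, 0], [0, 1, 0, 0, 0])  # peaceful cases often flagged
region_b = ([0, 1, 0, 0, 0], [0, 1, 0, 0, 0])  # peaceful cases rarely flagged

fpr_a = false_positive_rate(*region_a)
fpr_b = false_positive_rate(*region_b)
print(f"FPR region A: {fpr_a:.2f}, region B: {fpr_b:.2f}")
```

A persistent gap between such rates would indicate that the system burdens one region with false alarms, exactly the systematic slant the literature warns against.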
Transparency and interpretability are likewise critical for AI systems. Where decisions may determine life or death, a lack of clarity about who bears responsibility for AI-generated predictions can corrode trust in these technologies. Critics argue that relying on black-box AI decisions for such critical operations carries the risk of not understanding the means by which specific outcomes arise (Masakowski 2020). A lack of transparency could diminish the credibility of interventions and frustrate attempts to build buy-in across borders.
Overall, the literature shows AI's potential for predicting and preventing conflict while highlighting the importance of properly monitoring these developments. AI can enhance the effectiveness of diplomatic practice and peacekeeping operations and improve the reliability of early warning systems. Nevertheless, in the pursuit of international peace and security, AI must be used responsibly because of the risks related to data bias, transparency, and ethics.
Research Question
The central research question for this study is: To what extent can artificial intelligence be used for conflict prediction and prevention, given its risks and challenges to international peace and security? This question aims to clarify what AI technologies can offer for building more effective early warning systems and conflict prevention strategies, while also weighing the ethical, political, and technological conditions for using AI responsibly.
Answering this question requires showing the practical uses of AI in conflict management, the types of data and machine learning models that could support predictive analytics, and the role AI-based solutions might play in the work of international organizations. Additionally, the study examines the risk of deepening preexisting inequalities, the creation of new threats to global stability, and the need for oversight and regulation to counter them (Khan et al. 2023).
The aim of the research, then, is to provide a broad overview of how peacebuilding frameworks can accommodate the development of AI while taking note of the associated risks and the safeguards required to avoid negative outcomes. In answering this question, the study contributes to the discussion of how new security technologies are changing global security.
Research Objectives
The main objectives of this research are as follows:
1. To investigate the possibilities of using AI for conflict prediction and prevention, including the application of AI tools and methods such as machine learning models, data analysis, and early warning systems. The objective is to consider how these can be combined to recognize the signals that precede a conflict and to give decision-makers actionable indicators.
2. To analyze the risks and threats of applying AI to forecasting and avoiding conflicts, in order to understand the moral, technical, and political difficulties of using AI in international peace and security. Issues discussed include data bias, algorithmic transparency, accountability for AI decisions, and the misuse of AI technology in conflict zones.
3. To examine the actual use of AI in existing early warning systems and to evaluate how effective AI-based early warning systems are at averting conflict.
4. To examine the role of international institutions in governing the use of AI in conflict management, including the extent to which international organizations such as the UN are obliged to regulate AI technologies for conflict prediction and prevention so that they adhere to international norms and standards.
5. To propose a set of policy recommendations on how AI could be responsibly integrated into conflict prevention frameworks, developing policies that advance AI's benefits while limiting its risks and challenges in peacekeeping efforts.
These objectives allow the research to explore critical lessons about the future role of AI in conflict management. They may also provide useful guidance to policymakers and practitioners for enhancing international peace and security (Esberg and Mikulaschek 2021).
Research Methodology
This research adopts a qualitative approach combining a literature review, case study analysis, expert interviews, and comparative analysis. The approach aims to develop detailed knowledge of how AI can be used to foresee and prevent conflict, as well as the opportunities and challenges of using AI in matters of international peace and security. Combining these methods makes it possible to connect the theory of AI-supported conflict management with its practice (Hossain et al. 2024).
Literature Review
The research begins with a comprehensive literature review of the current state of AI's application to conflict forecasting and prevention. Drawing on academic sources, policy papers, and published documents, the review identifies the thematic, theoretical, and methodological gaps in AI-driven conflict prevention strategies. In particular, it details the theoretical approaches used to develop AI systems for conflict management, i.e., early warning systems, predictive analytics, and machine learning. The review also addresses the issues and concerns raised by previous studies, such as the ethical implications of using AI in conflict resolution (Agarwala and Chaudhary 2021). By analyzing the status of current research, it identifies gaps in the literature and frames how future research on AI and global security should proceed.
Case Study Analysis
The study also carries out in-depth case studies of AI-driven conflict prediction and prevention initiatives to support the literature review. These include the incorporation of machine learning into early warning systems and the building of predictive models to forecast violence in specific regions. The research examines both the successes and the failures of using AI in conflict management (Horowitz, Kahn, and Mahoney 2020), using the cases to assess what worked and what did not. For instance, the study analyzes how AI has been used to monitor political instability, social unrest, and other conflict-related issues, and what deploying AI in such complex, changing environments reveals about its limits in preventing violence. The case studies also yield lessons and practical steps for future prevention and peacebuilding efforts.
Table 1
AI Implementation in Conflict Zones

Region | AI Applications | Challenges
Africa | Early Warning Systems, Predictive Analytics | Data Availability, Political Instability
Middle East | Diplomatic Responses, Peacekeeping | Geopolitical Sensitivities, Algorithm Bias
Southeast Asia | Military Strategy, Humanitarian Aid | Ethical Concerns, Local Infrastructure
Expert Interviews
Interviews will be conducted with experts to understand how AI is used in conflict prediction and prevention today. Respondents will be drawn from professionals in artificial intelligence, international relations, peacebuilding, and conflict resolution, offering a first-person view of state-of-the-art AI technologies as they are applied in conflict management (Ramcharan and Ramcharan 2020). The interviews will also cover the ethical, political, and technical dimensions of AI in conflict zones, particularly accountability, transparency, and misuse risks. Experts will address the outcomes of AI in conflict resolution from their own perspective, from how AI could strengthen early warning systems to what contribution it could make to diplomatic responses and peacekeeping work in the future. Coupling these interviews with the theoretical analysis and the case studies will test the corresponding arguments and provide practical, on-the-ground insights for the research.
Comparative Analysis
The comparative analysis assesses how practical the application of AI in conflict prevention strategies can be across different places and situations. Methodologically, AI applications in conflict zones with distinct socio-political and economic conditions are compared in order to discover common patterns, challenges, and best practices. The research can thus study the use of AI in, for example, Africa, the Middle East, and Southeast Asia, where the specific conflict dynamics, such as ethnic violence, governmental instability, or resource scarcity, differ (Hoffman and Kim 2023). It examines how far AI tools can be adapted to each region and circumstance, and identifies the similarities and differences across these applications. The comparative approach also provides a basis for identifying the factors that influence the success or failure of AI-based conflict prevention efforts, including the data available for analysis, the installed infrastructure, and the position of local stakeholders toward AI-based solutions.
By applying these methods, the study offers a complete, multi-dimensional understanding of the role of AI in international peace and security. The literature review supplies the theoretical grounding, the case study analysis examines practical AI applications, and the expert interviews and comparative research across regions give a closer view of the opportunities and problems of applying AI in conflict management and of what has worked best. Together, these methods provide a revealing look at how far AI can help promote peace, or become a cause of conflict.
This research methodology thus provides a comprehensive and fair view of how AI should be utilized in conflict prediction and prevention, and of the pros and cons of adopting it responsibly.
Results and Findings
This research identifies both the contributions and the possible limitations of AI in predicting and preventing conflict. Results so far indicate that early warning systems can significantly improve the accuracy and timeliness of conflict forecasts in regions where AI's capabilities are brought to bear. Case studies of AI-based predictive models indicate that the models have been able to predict whether conflicts will emerge from indicators of political instability, economic distress, and social unrest.
Key among the challenges are data quality and algorithmic transparency. The reach of an AI system is limited by incomplete or biased data, which leads to inaccurate predictions and reduces the utility of interventions (Haney 2020). Beyond that, there is a further set of problems regarding the lack of accountability in AI decision-making processes, especially when human lives are at stake.
Despite these obstacles, experts note that with proper restrictions on its use, AI has the potential to be an important contributor to international peace and security. Going forward, the transparency of AI models should play a crucial role, and the models should be built with data bias in mind, using only diverse, high-quality data, so as to maximize predictive accuracy and minimize bias.
Discussion
AI may thus furnish a tool for predicting and preventing conflicts in service of international peacemaking. AI is already at work in this field, but the challenges are substantial. One of the major concerns is data quality. As with most AI algorithms, a prediction is only as good as the data in the algorithm's pool; if that data is poor or skewed, the prediction will be in error (Freeman 2021). In consequence, interventions tailored to inaccurate predictions may fail to prevent conflict and may even make it more likely.
The second challenge is the lack of transparency in AI decisions. Decisions made by AI systems in conflict zones carry immense weight, so it is important that they be transparent and explainable.
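One widely used transparency practice consistent with this point is to prefer models whose outputs can be decomposed into per-feature contributions, so a human reviewer can see why a case was flagged. The sketch below does this for a hypothetical linear risk score; the feature names and weights are assumptions, not taken from any real system.

```python
# Hedged sketch of an explainable (linear) risk score: each feature's
# contribution to the total is reported separately. Names and weights
# are hypothetical.
FEATURES = ["gdp_decline", "protest_events", "media_hostility"]
WEIGHTS = [0.8, 0.5, 0.3]  # assumed, pre-fitted coefficients

def explain_score(values):
    """Return (total score, per-feature contributions) for one case."""
    contributions = {f: w * v for f, w, v in zip(FEATURES, WEIGHTS, values)}
    return sum(contributions.values()), contributions

score, parts = explain_score([2.0, 3.0, 1.0])
print(f"risk score = {score:.1f}")
for name, c in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {c:+.1f}")  # largest contributors listed first
```

Such a breakdown does not resolve the black-box problem for complex models, but it illustrates the kind of auditable output the transparency argument calls for.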
The ethical implications of AI in conflict prevention are also important. Although AI may be able to identify early warning signs of conflict, the same capability could devolve into a tool of surveillance or even political manipulation.
There are also compelling arguments for involving AI in peacekeeping and conflict prevention. AI allows analysis to run in real time, from data to response, directing actions on the ground in support of diplomatic and humanitarian efforts and enabling a more proactive approach to intervention. Moreover, AI can quickly and precisely analyze large quantities of data, searching for patterns and emerging trends that a human analyst can miss.
The deployment of AI in predicting and preventing conflict should therefore be governed by clear ethical guidelines and regulatory frameworks that guarantee responsible use. These frameworks should be built around transparency, accountability, and fairness, and AI systems should be designed and deployed with the interests of affected populations in mind. The uses, and particularly the risks, of AI technologies should also be continually evaluated and periodically revised as geopolitical dynamics shift (Beduschi 2022). A collective effort by governments, international organizations, and the private sector will ultimately determine whether AI becomes a constructive feature of conflict prevention. With the collaboration of stakeholders, AI systems can be made transparent, accountable, and compatible with international norms and standards.
Conclusion
The potential of Artificial Intelligence to predict and prevent conflicts can strengthen international peace and security. AI can process very large volumes of data quickly, learning patterns in real time and enabling diplomatic intervention, peacekeeping, and early warning systems of a kind only AI can provide. For example, AI can marshal a large dataset to predict conflict zones, allowing stakeholders to take quick but accurate action to de-escalate tensions before they turn violent. Likewise, it helps foresee the triggers of potential clashes or war, saving lives and supporting stability in conflict-prone areas.
At the same time, AI faces roadblocks on the path to conflict prevention. One issue is data bias: the system is only as good as the data on which its algorithm is trained, and the algorithm can return poor findings if that data is incomplete or inaccurate. For example, if the data used to train an AI model comes from a politically biased record, or omits certain demographic groups, the system may make false or unfair predictions. This can widen existing inequalities, reinforce stereotypes, or overlook essential causes of conflict. Achieving the desired outcome therefore requires that AI systems be trained on high-quality, diverse datasets that faithfully reflect the intricacy of conflict dynamics in different regions and contexts.
Another foremost challenge is the lack of transparency in AI decision-making processes. In conflict zones, where decisions can be a matter of life or death within a short time, AI systems must first be correct, then explainable, and above all auditable. Without transparency, there can be no trust in an AI system and no responsibility for its outputs. Decisions based on AI predictions should be easy to explain to the people affected by them, to the people making them, and to those who must justify them at the international level.
There are also ethical challenges inherent in AI-driven conflict prevention strategies. The uses of AI can be positive or negative. On the positive side, AI can support real-time, accurate predictions for preventing conflict, for diplomatic practice, and for peacekeeping work. On the negative side, AI systems can be abused to advance geopolitical aims, manipulate public opinion, or reproduce conflict: the use of AI in surveillance, information warfare, or automated military systems widens the ethical stakes. It is therefore necessary to establish moral rules for the use of AI in conflict prevention that guide the deployment of these technologies so as to respect international law, justice, and fairness.
It is crucial to set up sound ethical and regulatory infrastructure to reap the benefits of AI while mitigating its risks. Such infrastructure should commit to transparency, accountability, and fairness in the design, deployment, and oversight of AI systems. AI technologies should be adapted to the values and norms of the world community, not judged only by their technological effects. This requires governments, international organizations, and private sector entities to work hand in hand to ensure that AI is used only where it can serve well, in a safe environment, and in conformity with agreed standards.
All of this will require international collaboration to set global standards and norms for the use of AI in conflict zones. In ethical and geopolitical terms, preventing conflicts with AI is a complex process requiring the cooperation of governments, international organizations, and civil society. Collaboration and dialogue on the appropriate uses of AI, and on the means of responsible implementation, create a shared understanding among stakeholders. Such a shared approach would also help mitigate data bias, improve the transparency of AI algorithms, and address ethical questions over the use of AI to prevent conflict and promote world security.
With the increasing use of AI technologies comes the need for policymakers and practitioners to remain attentive to the risks and unwanted consequences of deploying them. AI has enormous potential to assist in the promotion of peace and the prevention of conflict, provided it is used with responsible design and understanding. If serious effort is invested in transparency, fairness, and collaboration, AI can be an excellent tool for a more peaceful world. The future of AI in conflict prevention is bright, though its success will depend on bringing these new tools into conflict management frameworks that extract their benefits and minimize their risks.
References
- Beduschi, A. (2022). Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks. International Review of the Red Cross, 104(919), 1149–1169. https://international-review.icrc.org/sites/default/files/reviews-pdf/2022-06/harnessing-the-potential-of-artificial-intelligence-for-humanitarian-action-919.pdf
- Esberg, J., & Mikulaschek, C. (2021). Digital technologies, peace and security: Challenges and opportunities for United Nations peace operations (United Nations Peacekeeping No. 7). United Nations. https://peacekeeping.un.org/sites/default/files/esberg_and_mikulaschek_-_conflict_peace_and_digital_technologies_-_v3_210825.pdf
- Freeman, L. (2021). Weapons of war, tools of justice: Using artificial intelligence to investigate international crimes. Journal of International Criminal Justice, 19(1), 35–53. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3949107
- Goldfarb, A., & Lindsay, J. R. (2022). Prediction and Judgment: Why artificial intelligence increases the importance of humans in war. International Security, 46(3), 7–50. https://doi.org/10.1162/isec_a_00425
- Haney, B. S. (2020). Applied artificial intelligence in modern warfare and national security policy. Hastings Science & Technology Law Journal, 11(1), 61–99. https://repository.uclawsf.edu/hastings_science_technology_law_journal/vol11/iss1/5
- Hoffman, W., & Kim, H. M. (2023). Reducing the risks of artificial intelligence for military decision advantage. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/reducing-the-risks-of-artificial-intelligence-for-military-decision-advantage/
- Hofmann, F. (2025). AI for peace. Oxford University Press.
- Horowitz, M. C., & Scharre, P. (2021). AI and international stability. Center for a New American Security. https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/AI-and-International-Stability-Risks-and-Confidence-Building-Measures.pdf
Cite this article
- APA: Ullah, M. U., Saleem, S., & Munir, A. (2025). Artificial Intelligence in Conflict Prediction and Prevention: Opportunities and Risks for International Peace and Security. Global Social Sciences Review, X(I), 192–200. https://doi.org/10.31703/gssr.2025(X-I).17