Thursday, June 1, 2023

The Disruptive Dependency Theory -- Part 5

My series on the Disruptive Dependency Theory continues. I have cut and pasted content from my Teaching Note on the Disruptive Dependency Theory into this blog post, so there are some minor indentation and formatting errors. I ask my readers to please ignore these, since they don't affect the main arguments.

The Disruptive Dependency Theory argues that core nations use their political power to maintain dominance over periphery nations, and that this political influence often prevents periphery nations from acting in their own best interests and/or causes them to come to harm in myriad ways.



Value of the Theory

Any theory that can predict with some degree of accuracy whether a nation will be invaded is, by any standard, a useful theory. The value of this theory lies in the billions of dollars in savings that rich nations could accrue simply by being more realistic about the world around them. To take just one estimate: the United States alone would have saved more than 2 trillion dollars by following the recommendations of the theory.





The Limitations of ChatGPT in Writing History

Many people have started looking at the use of ChatGPT to write history. One researcher (who holds multiple Ph.D.s) examined ChatGPT's performance in identifying the best American presidents to date. He found that, while some of the analysis is correct, there are notable lacunae: some excellent presidents were missed. This recalls the oft-repeated criticism of ChatGPT, viz., that ChatGPT sounds like an over-confident sophomore. It brims with confidence but lacks factuality.



Here are twenty different problems with using ChatGPT to write about history. It is worth noting that much of this list was compiled by ChatGPT itself, and the content it generated was then run through another A.I. The latter A.I. is clearly influenced by the speech of Mark Antony in Shakespeare's Julius Caesar, from which its references to an "honorable man" appear to come.

ChatGPT can provide a lot of useful information and help with many tasks. However, when it comes to writing history, there are several problems that need to be considered.

1.       Bias: ChatGPT's responses can be influenced by the data used to train it, which can lead to biases in its understanding of historical events. Bias is a major problem in writing history. It is very easy to write a history of the world with one definite bias or another and write off said bias as unimportant. One could, for instance, write a version of colonial history that fails to take into account the enormous suffering inflicted upon the colonized over decades and decades of European colonialism. But this would be bad historical writing – of the sort one should avoid if one wants to be considered a serious historian. But perhaps one would like to think that Niall Ferguson is an honorable man and would never stoop to inserting bias into his own writing.

2.       Lack of Context: ChatGPT may not have a complete understanding of the historical context surrounding an event, which can lead to incomplete or inaccurate explanations. One often needs to understand the context of certain events. It is certainly true that the Jallianwala Bagh massacre, for instance, was conducted in a certain context. The man ordering the summary execution of unarmed civilians was under the belief that such an action was justifiable, since he was doing it "for the sake of the Empire." If such ruthlessly bloodthirsty behavior was considered justified by one man, it is worth asking whether other people in the British armed forces also harbored such attitudes. And again, one would like to think that Niall Ferguson is an honorable man who would never write about history without considering all the attendant context.

3.       Limited Perspective: ChatGPT is limited by the information available to it and may not be able to provide a complete picture of historical events. If Niall Ferguson were to claim that he had full perspective, this would likely strike most of us as fatally flawed from the very start. It is inconceivable that a single person could be familiar with all the languages spoken by the peoples colonized by Britain. And without reading their accounts of colonization, how can one know that one is right? And, sure, Niall Ferguson must be considered an honorable man who would simply admit to his own limited perspective.

4.       Misinterpretation: ChatGPT may misinterpret historical events, leading to inaccurate or misleading information. Now, this may be where the argument that Niall Ferguson is an honorable man begins to significantly weaken. Given that numerous historians have found his writing to be full of misinterpretation, one must ask – "Is it really possible that one man holds so much information in his head that he is able to avoid all criticism of misinterpretation?"

5.       Inability to Interpret Emotion: ChatGPT may not be able to accurately interpret the emotions of historical figures, leading to a lack of understanding of their motivations. Back to Niall Ferguson. How can you interpret the emotion of someone writing in Tamil, Yoruba or Zulu when you don’t even know the language?

6.       Inability to Interpret Culture: ChatGPT may not be able to accurately interpret the cultural context of historical events, leading to a lack of understanding of their significance. Same criticism of Niall as (#5).

7.       Inability to Interpret Language: ChatGPT may not be able to accurately interpret historical documents that are written in a language it is not familiar with. Same criticism of Niall Ferguson as (#6).

8.       Inability to Verify Sources: ChatGPT may not be able to verify the accuracy of sources, leading to the possibility of using unreliable or biased information.

9.       Inability to Evaluate Evidence: ChatGPT may not be able to accurately evaluate the credibility of evidence, leading to the possibility of using inaccurate or misleading information.

10.   Lack of Critical Thinking: ChatGPT may not be able to engage in critical thinking or analysis of historical events, leading to oversimplification or inaccurate conclusions.

11.   Insufficient Historical Training: ChatGPT may not have sufficient training in historical research methods or historiography to produce accurate historical accounts. This is perhaps the only criticism leveled at him that Niall Ferguson has been able to successfully refute. This guy has had more than enough "training."

12.   Overreliance on Technology: ChatGPT may become too reliant on technology and algorithms, leading to a lack of creativity and originality in historical writing. 

13.   Inability to Recognize Historical Controversies: ChatGPT may not be able to recognize or address historical controversies, leading to the perpetuation of myths or misconceptions.

14.   Insufficient Empathy: ChatGPT may not be able to empathize with the experiences of historical figures, leading to a lack of depth and nuance in historical writing.

15.   Inability to Recognize Historical Patterns: ChatGPT may not be able to recognize historical patterns or trends, leading to a lack of understanding of how events are interconnected.

16.   Inability to Address Complexity: ChatGPT may struggle to address complex historical events or issues, leading to oversimplification or superficial analysis.

17.   Lack of Originality: ChatGPT's responses may be based on existing historical accounts, leading to a lack of originality or new insights.

 

18.   Dependence on Data Availability: ChatGPT's responses may be limited by the availability of historical data, leading to a lack of depth or detail.

19.   Insufficient Historical Contextualization: ChatGPT may not be able to provide adequate historical contextualization, leading to a lack of understanding of how events fit into larger historical trends.

20.   Inability to Recognize Historical Interpretations: ChatGPT may not be able to recognize different historical interpretations of events, leading to a lack of understanding of how historical accounts can differ.


 

While ChatGPT can be a useful tool for various purposes, it is not well-suited for writing history. Its limitations—such as inherent biases, lack of contextual understanding, and inability to offer nuanced interpretations—make it less reliable than trained human historians. The problem with Niall Ferguson's work is practically the same. Niall Ferguson, for instance, overlooks a critical reality: many colonized nations were left impoverished, with starkly low Human Development Index (HDI) indicators, including limited access to education, when the British departed. A system that leaves countries in such a dire state—both economically and intellectually—cannot, by any measure, be considered a good one. How can someone overlook numbers that almost anyone would think to look at?

In the end, it's the human historians who should be trusted. After all, it is we who have developed insightful frameworks like the Disruptive Dependency Theory.

 

 

