
Enlightenment Promotion Algorithm Variables

Our goal is to effectively evaluate arguments by breaking them down into individual components and closely examining the strength of each component using various algorithms. This enables us to link the strength of our overall conclusion to the strength of the evidence supporting it. To ensure transparency, we will show our math and provide an open process. The algorithms track important data for each belief, including:

Belief Score

This score reflects the overall strength of a belief and is continuously updated based on the performance of the scores below. It embodies the wisdom of thinkers throughout history, from Aristotle to Carl Sagan, including Sagan's reminder that "extraordinary claims require extraordinary evidence" and Hume's observation that "the wise man proportions his belief to the evidence." These principles form the foundation of scientific, moral, and human progress. And now, they are explicitly defined in software code for the first time, allowing us to engage with one another and reason together at scale.

def rank_arguments(arguments):
    """
    Ranks a list of arguments based on their pro/con vote balance and strength of evidence.
    Each argument is a dict with 'pro_votes', 'con_votes', and 'evidence_strength' keys.
    """
    for arg in arguments:
        total_votes = arg['pro_votes'] + arg['con_votes']
        if total_votes == 0:
            # No votes yet: treat the argument as neutral rather than dividing by zero
            pro_score = con_score = 0.0
        else:
            pro_score = arg['pro_votes'] / total_votes
            con_score = arg['con_votes'] / total_votes
        strength_score = arg['evidence_strength']  # assumed to be a score from 0 to 1

        # Combine the pro/con balance with the strength of the evidence
        argument_score = (pro_score - con_score) * strength_score

        arg['score'] = argument_score

    # Sort the arguments by score, highest to lowest
    return sorted(arguments, key=lambda x: x['score'], reverse=True)
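For example, calling the function on two hypothetical arguments (the vote counts and evidence strengths below are made up purely for illustration):

# Hypothetical sample data to illustrate the ranking
sample_arguments = [
    {'pro_votes': 12, 'con_votes': 3, 'evidence_strength': 0.8},
    {'pro_votes': 5, 'con_votes': 9, 'evidence_strength': 0.6},
]

ranked = rank_arguments(sample_arguments)
print(ranked[0]['score'])  # 0.48 -- the first argument wins on both votes and evidence strength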

Logical Fallacy Score

Recognizing fallacious arguments is crucial for developing a forum for better group decision-making, for creating an evidence-based political party, and, more broadly, for helping humans make intelligent group decisions.

To achieve this, we propose applying the scientific method of tying the strength of our beliefs to the strength of the evidence. For human arguments, that evidence takes the form of pro/con sub-arguments, which are in turn tied to data by logic. We will therefore explicitly tie the strength of each belief to the power, or score, of its pro/con evidence.

We will measure the relative performance of pro/con sub-arguments by weighing each argument against specific logical fallacies. Once a user flags an argument as using a logical fallacy, or the computer uses semantic equivalency scores to flag an idea as similar to another idea already identified as fallacious, the site will create a dedicated space for reasons to agree or disagree with the fallacy accusation. The logical fallacy score reflects the relative performance of these accusation arguments, and our argument-analysis algorithms subject them to the same treatment as any other argument: grouping similar ways of saying the same thing, scoring truth separately from importance, and scoring the linkage between evidence and conclusion.
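As a rough sketch (not the final implementation), the accusation's score could simply be the share of sub-argument weight that supports it, assuming each pro/con sub-argument already carries a score between 0 and 1:

def fallacy_accusation_score(pro_scores, con_scores):
    """
    Hypothetical sketch: combine the scores of sub-arguments agreeing with a
    fallacy accusation (pro_scores) and disagreeing with it (con_scores) into a
    single value between 0 and 1. Values near 1 mean the accusation is well
    supported; values near 0 mean it has been effectively rebutted.
    """
    total = sum(pro_scores) + sum(con_scores)
    if total == 0:
        return 0.5  # no evidence either way yet
    return sum(pro_scores) / total

# Example: three sub-arguments support the accusation, one weak sub-argument opposes it
print(fallacy_accusation_score([0.8, 0.7, 0.6], [0.2]))  # ~0.91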

There are several common types of fallacious arguments that are often used to support conclusions, but they are actually non-sequiturs, meaning they do not logically follow from the premises. Examples of these types of arguments include:

  • Ad hominem fallacy: This is when someone attacks the person making an argument rather than addressing the argument itself. For example, saying, "You can't trust anything he says because he's a convicted criminal," does not logically address the argument.
  • Appeal to authority fallacy: This is when someone claims something is true simply because an authority figure says it is true without providing any other evidence or reasoning. For example, saying, "Dr. Smith said it, so it must be true," does not logically prove that the argument is sound.
  • Red herring fallacy: This is when someone introduces a completely unrelated topic or argument to distract from the original argument. For example, saying, "I know I made a mistake, but what about all the good things I've done for the company?" does not logically address the issue.
  • False cause fallacy: This is when someone claims that because one event happened before another, it must have caused the second event. For example, saying, "I wore my lucky socks, and then we won the game, so my socks must have caused the win," does not logically prove causation.

By identifying and avoiding these fallacies, individuals can contribute to a more rigorous and evidence-based decision-making process, which can ultimately lead to a more effective political system and better-informed public opinion. The Logical Fallacy Score allows for the identification of specific fallacious arguments and promotes critical thinking and reasoned discourse.

Algorithm

  1. Identify a list of common logical fallacies.
  2. Allow users to flag arguments that may contain these logical fallacies.
  3. Enable users to share evidence and reasoning to support or weaken the belief that the argument identified in step #2 contains a logical fallacy.
  4. Develop a system to automatically flag arguments that are similar to other statements already flagged as containing a logical fallacy (a minimal similarity sketch follows this list).
  5. Create a machine learning algorithm to detect language patterns that may indicate a particular fallacy.
  6. For each argument flagged as containing a logical fallacy, evaluate the score of logical fallacy sub-arguments that support or weaken the belief that the argument contains a logical fallacy.
  7. Use the results of these evaluations to assign a Logical Fallacy Score confidence interval.

It's important to note that the Logical Fallacy Score is just one of many algorithms used to evaluate each argument. We will also use other algorithms to determine the strength of the evidence supporting each argument, the equivalency of similar arguments, and more. The Logical Fallacy Score is designed to identify arguments that contain logical fallacies, which can weaken their overall credibility. By assessing the score of sub-arguments that contain fallacies, we can better evaluate the strength of an argument and make more informed decisions based on the evidence presented.

Code

# Define a list of common logical fallacies
logical_fallacies = ['ad hominem', 'appeal to authority', 'red herring', 'false cause']

# Define a dictionary to store arguments and their logical fallacy scores
argument_scores = {}

# Define a function to evaluate the score of a sub-argument for a given logical fallacy.
# By the thresholds used below, this score is signed: negative values mean the arguments
# supporting the fallacy accusation are winning, positive values mean the accusation is
# being refuted, and 0 means there is no accusation to evaluate.
def evaluate_sub_argument_score(argument, fallacy):
    # Placeholder: a real implementation would aggregate the scores of the pro/con
    # sub-arguments attached to this fallacy accusation.
    score = 0.0
    return score

# Define a function to evaluate the logical fallacy score for an argument
def evaluate_argument_score(argument):
    # Initialize the logical fallacy score to 0
    score = 0
    
    # Iterate over each logical fallacy
    for fallacy in logical_fallacies:
        # Evaluate the score of the sub-argument for the given logical fallacy
        sub_argument_score = evaluate_sub_argument_score(argument, fallacy)
        
        # Add the score of the sub-argument for the given logical fallacy to the logical fallacy score
        score += sub_argument_score
    
    # Return the logical fallacy score for the argument
    return score

# Allow users to flag arguments that may contain logical fallacies
flagged_arguments = []

# Enable users to share evidence and reasoning to support or weaken the belief that the argument contains a logical fallacy
argument_evidence = {}

# Develop a system to automatically flag arguments that are similar to other statements already flagged as containing a logical fallacy
similar_arguments = {}

# Create a machine learning algorithm to detect language patterns that may indicate a particular fallacy
fallacy_detector = YourFallacyDetector()  # defined below; currently a simple keyword matcher rather than a trained model

# Evaluate the logical fallacy score for each argument flagged as containing a logical fallacy
for argument in flagged_arguments:
    # Evaluate the logical fallacy score for the argument
    argument_score = evaluate_argument_score(argument)
    
    # Assign the logical fallacy score confidence interval for the argument
    if argument_score < -2:
        confidence_interval = "Very likely fallacious"
    elif argument_score < 0:
        confidence_interval = "Possibly fallacious"
    elif argument_score == 0:
        confidence_interval = "No indication of fallacy"
    elif argument_score < 2:
        confidence_interval = "Possibly sound"
    else:
        confidence_interval = "Very likely sound"
    
    # Store the argument and its logical fallacy score
    argument_scores[argument] = {'score': argument_score, 'confidence_interval': confidence_interval}

Here is a simple, keyword-based starting point for YourFallacyDetector:

import re

class YourFallacyDetector:
    
    def __init__(self):
        self.fallacies = {
            'ad hominem': ['ad hominem', 'personal attack', 'poisoning the well'],
            'appeal to authority': ['appeal to authority', 'argument from authority'],
            'red herring': ['red herring', 'diversion', 'smoke screen'],
            'false cause': ['false cause', 'post hoc ergo propter hoc', 'correlation vs causation']
        }
        
        self.patterns = {}
        for fallacy, keywords in self.fallacies.items():
            self.patterns[fallacy] = re.compile(r'\b(?:%s)\b' % '|'.join(keywords), re.IGNORECASE)
    
    def detect_fallacy(self, text):
        results = {}
        for fallacy, pattern in self.patterns.items():
            match = pattern.search(text)
            if match:
                results[fallacy] = match.group()
        return results

Because this detector is keyword-based, it only flags text that explicitly names a fallacy (or one of its synonyms); it cannot yet recognize a fallacy from the content of the argument alone. You can call the detect_fallacy method on any piece of text, and it will return a dictionary of detected fallacies and the specific keyword that triggered the detection. For example:

detector = YourFallacyDetector()

text = "That's just an ad hominem attack, not a rebuttal."
results = detector.detect_fallacy(text)
print(results)  # {'ad hominem': 'ad hominem'}

text = "This looks like post hoc ergo propter hoc reasoning to me."
results = detector.detect_fallacy(text)
print(results)  # {'false cause': 'post hoc ergo propter hoc'}

text = "You can't trust anything he says because he's a convicted criminal."
results = detector.detect_fallacy(text)
print(results)  # {} -- no fallacy keyword appears, even though the text is an ad hominem attack

Catching cases like the last one is exactly what the machine learning approach described in the Path Forward section is meant to address.

Path Forward

  1. A large and diverse dataset: To train the machine learning models used in the system, it would be helpful to have a large and diverse dataset of examples of logical fallacies. This dataset would ideally include examples from a wide range of domains (e.g., politics, business, science) and from different types of media (e.g., news articles, social media posts, speeches).

  2. Domain-specific knowledge: Some types of logical fallacies may be more common in certain domains than others. For example, ad hominem attacks may be more common in political discourse than in scientific research. To improve the accuracy of the system, it would be helpful to incorporate domain-specific knowledge into the algorithms.

  3. Human input and feedback: While machine learning algorithms can be very effective at detecting patterns in large datasets, they may still make mistakes or miss certain nuances. To address this, the system could incorporate human input and feedback. For example, users could flag examples of logical fallacies that the system missed, or provide feedback on examples that were flagged incorrectly.

  4. Continual improvement: Like any machine learning system, the logical fallacy detection system would benefit from continual improvement over time. This could involve collecting new data, refining the algorithms, and incorporating feedback from users. As the system improves, it could become more accurate and effective at identifying logical fallacies, which could ultimately lead to better decision-making and more informed public discourse.

Evidence Verification Score (EVS)

This score measures the degree to which a belief has been verified by various forms of evidence. The score can consider the independence and quality of scientific studies, historical trends, social experiments, anecdotal evidence, and other relevant factors.

Calculating the EVS

To calculate this score, we assess the relative strength of each piece of evidence proposed as a reason to strengthen or weaken a belief. The score considers the quantity and variety of scenarios tested, the number of replications, and how similar their results are. It also considers the quality of the studies or evidence, including bias, methodology, and sample size.
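As a sketch of how the pieces fit together, one simple combination (the same one used by the example code later in this section) is to multiply the component scores defined below:

def evidence_verification_score(esiw, ecrs, erq, erp):
    """
    Sketch combining the component scores defined in this section:
    ESIW - Evidence Source Independence Weighting (0-1)
    ECRS - Evidence-to-Conclusion Relevance Score (0-1)
    ERQ  - Evidence Replication Quantity (number of replications)
    ERP  - Evidence Replication Percentage (0-100)
    """
    return esiw * ecrs * erq * (erp / 100)

# Example: a randomized controlled trial (ESIW 0.85), highly relevant (ECRS 0.9),
# replicated 5 times, with 90% of replications producing similar results
print(evidence_verification_score(0.85, 0.9, 5, 90))  # 3.4425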

Evidence Source Independence Weighting (ESIW)

This weighting is a critical component of our evaluation process that helps us determine the reliability of different types of evidence. Our weighting algorithm assigns a score to each type of evidence based on its level of independence.

To ensure transparency, each categorization is backed by its own pro and con arguments with up/down votes and other measures that promote and measure argument quality. These help us determine our confidence that the chosen category for each piece of evidence is appropriate (a minimal sketch of that confidence calculation appears after the reliability list below).

The reliability rankings of different types of evidence, sorted from most to least reliable (based on our current scoring system), are as follows:

  1. Statistics and Data with links to sources
  2. Formal scientific studies and results from experiments or trials (Meta-analysis, Systematic review, Randomized controlled trial (double-blind, single-blind), Cohort study, Case-control study, Cross-sectional study, Longitudinal study, Observational study, Correlational study, Experimental study, and Quasi-experimental study), each with links to the published results
  3. Proposed historical trends (with references to data from history)
  4. Expert testimony from relevant authorities, official documents, reports, and published claims (with evidence to support the causal relationships)
  5. Expert and social media claims
  6. Personal experience or anecdotal evidence
  7. Common sense or logical reasoning
  8. Analogies or metaphors
  9. Cultural or social norms
  10. Intuition or gut feeling (based on evolved or adaptive ethics and morals)
  11. News articles or media reports
  12. Survey data or public opinion polls
  13. Eye-witness testimony
  14. Visual evidence such as photographs or videos
  15. Historical artifacts or documents

Rest assured, we'll show you our math and provide complete transparency throughout our evaluation process, so you can understand how each piece of evidence is weighted and the impact it has on our overall conclusion.
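Here is a minimal sketch of that category-assignment confidence, assuming only up/down vote totals are available (the add-one smoothing constant is an illustrative assumption, not part of the current design):

def category_confidence(up_votes, down_votes):
    """
    Hypothetical sketch: confidence (0-1) that a piece of evidence has been placed
    in the right reliability category, based on up/down votes on the arguments for
    and against that placement. Add-one smoothing keeps a single vote from pushing
    the confidence all the way to 0 or 1.
    """
    return (up_votes + 1) / (up_votes + down_votes + 2)

# Example: 18 votes agree the study was categorized correctly, 2 disagree
print(category_confidence(18, 2))  # ~0.86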

Evidence Replication Quantity (ERQ)

Used to account for the number of times a study or experiment has been replicated.

Evidence Replication Percentage (ERP)

The ERP measures the percentage of replications that produced similar results. To illustrate the use of ERQ and ERP, consider a hypothetical scenario in which a study examining the effects of a certain medication on a particular disease has been conducted multiple times. The ERQ counts the number of replications, while the ERP measures the share of those replications that agreed with the original finding. Using these metrics together, we can more accurately evaluate the reliability of the evidence and make informed decisions based on both the strength of the supporting evidence and the reliability of the data.
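A minimal sketch of the two metrics, assuming we simply record whether each replication attempt reproduced the original finding:

def replication_metrics(replication_results):
    """
    Hypothetical sketch: given one boolean per replication attempt (True if the
    replication produced similar results), return the Evidence Replication
    Quantity (ERQ) and Evidence Replication Percentage (ERP).
    """
    erq = len(replication_results)
    erp = 100 * sum(replication_results) / erq if erq else 0
    return erq, erp

# Example: five replications, four of which reproduced the original result
print(replication_metrics([True, True, False, True, True]))  # (5, 80.0)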

Evidence-to-Conclusion Relevance Score (ECRS)

Introducing the Evidence-to-Conclusion Relevance Score (ECRS): a key metric for the open internet evaluation process. The ECRS scores the relevance of the evidence presented as a reason to support or oppose a conclusion. It is calculated from the performance of pro/con sub-arguments about whether the evidence would necessarily prove the conclusion if, for example, it were replicated indefinitely by double-blind scientific methods. Don't worry; we won't leave you wondering how this score is calculated. We'll show you our math and provide complete transparency throughout our evaluation process.

Here is an example of the code:

evidence_categories = {
    "statistics_and_data": 0.9,
    "formal_scientific_studies_randomized_controlled_trials": 0.85,
    "formal_scientific_studies_meta_analysis": 0.8,
    "formal_scientific_studies_observation_studies": 0.75,
    "proposed_historical_trends": 0.7,
    "expert_testimony": 0.65,
    "expert_and_social_media_claims": 0.6,
    "personal_experience": 0.55,
    "common_sense": 0.5,
    "analogies_or_metaphors": 0.45,
    "cultural_or_social_norms": 0.4,
    "intuition_or_gut_feeling": 0.35,
    "news_articles_or_media_reports": 0.3,
    "survey_data_or_public_opinion_polls": 0.25,
    "eye_witness_testimony": 0.2,
    "visual_evidence": 0.15,
    "historical_artifacts_or_documents": 0.1
}

# Define the evidence replication percentages for each piece of evidence
evidence_replication_percentages = {
    "study_1": 90,
    "study_2": 95,
    "study_3": 85
}

# Define the evidence replication quantities for each piece of evidence
evidence_replication_quantities = {
    "study_1": 5,
    "study_2": 10,
    "study_3": 3
}

# Define the Evidence-to-Conclusion Relevance Score (ECRS) for each piece of evidence
evidence_ecrs = {
    "study_1": 0.8,
    "study_2": 0.9,
    "study_3": 0.7
}

# Assign each piece of evidence to a category (illustrative assumptions for this example)
evidence_category_assignments = {
    "study_1": "formal_scientific_studies_randomized_controlled_trials",
    "study_2": "formal_scientific_studies_meta_analysis",
    "study_3": "formal_scientific_studies_observation_studies"
}

# Look up the Evidence Source Independence Weighting (ESIW) for each piece of evidence
# from the weight of its assigned category
evidence_esiw = {}
for evidence, category in evidence_category_assignments.items():
    evidence_esiw[evidence] = evidence_categories[category]

# Calculate the Evidence Verification Score (EVS) for each piece of evidence
evidence_evs = {}
for evidence in evidence_category_assignments:
    evs = (
        evidence_esiw[evidence] *
        evidence_ecrs[evidence] *
        evidence_replication_quantities[evidence] *
        evidence_replication_percentages[evidence] / 100
    )
    evidence_evs[evidence] = evs

# Calculate the overall Evidence Verification Score (EVS) for the belief
belief_evs = sum(evidence_evs.values())

print("Overall Evidence Verification Score (EVS):", belief_evs)

Here is the type of code that could provide scores for each type of evidence:

# Sample data
arguments = [
    {
        "id": 1,
        "text": "Statistics and data are the most important type of evidence",
        "pro": True,
        "scores": [8, 7, 9, 6, 8]
    },
    {
        "id": 2,
        "text": "There are other types of evidence that are equally important",
        "pro": False,
        "scores": [5, 6, 7, 4, 7]
    },
    # Add more arguments here...
]

# Score each argument by the average of its individual ratings
def average_score(arg):
    return sum(arg["scores"]) / len(arg["scores"])

# Sum the average scores of the pro arguments (those agreeing that statistics and data
# deserve a high weighting) and of the con arguments (those disagreeing)
pro_sum = sum(average_score(arg) for arg in arguments if arg["pro"])
con_sum = sum(average_score(arg) for arg in arguments if not arg["pro"])

# Calculate the statistics_and_data weighting as the pro share of the total, so the
# result stays between 0 and 1 like the other category weights
if pro_sum + con_sum > 0:
    statistics_and_data = pro_sum / (pro_sum + con_sum)
else:
    statistics_and_data = 0.5  # no arguments yet, so stay neutral

print("The statistics_and_data value is:", statistics_and_data)