Wednesday, January 16, 2013

Visual Root

http://www.visualroot.com/

It is a cool website...

I think it is very important that we map out how one idea is related to another...

It is very important that when we strengthen or weaken an assumption, we automatically strengthen or weaken the conclusions based on that assumption...

I am trying to get an open source project going that outlines the reasons to agree or disagree with each conclusion, and lets you use one conclusion as a reason to agree or disagree with another conclusion...

Obviously a conclusion could be a good one, but still not support another conclusion... for instance, "the grass is green" is a good conclusion, but it could not be used very well as a reason to "increase funding for the poor". So I would like to count the number of reasons to agree or disagree with each belief, and then count the number of reasons to agree or disagree with each linkage...

I like counting reasons better than just up or down voting, because it forces you to back up your conclusion... and if you give a bad reason, it should have more reasons to disagree with it...

So my goal is to have the ratio of reasons to agree or disagree for each belief, and for each linkage... Also, if you are going to give scores to conclusions based on the ratio of reasons to agree vs. disagree with their arguments, and the ratio of reasons to agree or disagree with the linkages between an argument and a conclusion, you would need one more factor... you would need a "unique" factor, so you could identify arguments that are essentially saying the same thing, so you don't count those points twice...

It sounds complex, but it only takes 3 numbers for each belief: the ratio of reasons to agree vs. disagree, the ratio of reasons to agree or disagree that this belief is a valid reason to support another belief... and the ratio of reasons to agree or disagree that this belief is a unique reason (on the forum) to support another conclusion... If you multiply all these ratios, you should get a pretty good score for each belief, which can then be used to support other beliefs... As more people join the forum the numbers will change, but the numbers are not the important thing; the structure that we build that links one belief to another, and how these all interact, is what will allow artificial intelligence to understand how the human mind works...
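A minimal sketch of how those three ratios could combine, assuming the score is simply their product; the function names, the neutral default of 0.5, and the example numbers are illustrative assumptions:

```python
# Minimal sketch: a belief's score as the product of the three ratios described
# above. All names and defaults here are illustrative, not a finished design.

def ratio(agree: int, disagree: int) -> float:
    """Fraction of reasons that agree, between 0 and 1."""
    total = agree + disagree
    return agree / total if total else 0.5  # no data yet -> neutral

def belief_score(agree, disagree,
                 link_agree, link_disagree,
                 unique_agree, unique_disagree) -> float:
    """Belief quality x linkage validity x uniqueness."""
    return (ratio(agree, disagree)
            * ratio(link_agree, link_disagree)
            * ratio(unique_agree, unique_disagree))

# Example: 8 of 10 reasons agree with the belief, 6 of 8 agree it supports the
# other conclusion, and 9 of 10 agree it is a unique reason on the forum.
print(round(belief_score(8, 2, 6, 2, 9, 1), 3))  # 0.54
```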

Monday, December 31, 2012

Ethical Means and Ends

We should give more points to conclusions that have higher perceived ethicality of their methods and results

It's important for people to decide how strongly they support these conclusions and to consistently apply these rules. Often people are very flexible with their logic. For instance, they will say that the ends justify the means when it supports their conclusion, but they will reject this line of argument when it opposes their conclusion.
Computers can help us with this. People can give a score to a particular philosophical question; a computer could then run the math and tell you, based on those assumptions, which conclusions are more valid. This can help people be consistent in their thinking and find logical fallacies.
The idea score should give more points to conclusions that have a consensus that the means (methods used to obtain the result) are ethical

There are mathematical ways we can give more points to conclusions that have higher perceived ethicality of their methods and results

 

For people who are good at math, this could be represented more formally with the following equation and definitions.
Definitions
  • PES (Perceived Ethics Score): This can be added directly to the conclusion score, or we could apply a multiplier to the ethics score before adding it to the overall conclusion score.
  • Means
    • EMA (Ethical Means Asked): The number of people who gave a score to the ethics of the means (or method) of a proposal.
    • EM_i (Ethical Means): The score an individual gives a proposal for how ethical the means (or method) are (between 1 and 10).
    • C: A constant, such as 5, so that 100% agreement from a group of 50 people carries more weight than 100% agreement from a much smaller group.
    • 10: This is not required, and could be removed if we asked people to pick a number between 0 and 1. People may think that is weird, so dividing by 10 lets the math represent the average score as a number between 0 and 1: if the average score was 8, the equation would give us 0.8, or a validity of 80%.
  • Ends
    • EEA (Ethical Ends Asked): The number of people who gave a score to the ethics of the ends (or results) of a proposal.
    • EE_i (Ethical Ends): The score an individual gives a proposal for how ethical the ends (or results) are (between 1 and 10).
    • 10: See above.
    • EEJ (Ethical Ends Justify): The percentage of people who think that the ends (goals) justify the means (methods).
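The equation itself appears in the original post as an image that is not reproduced here. One plausible reconstruction from the definitions above, assuming EEJ weights the ends term against the means term, would be:

$$PES = (1 - EEJ)\cdot\frac{\sum_{i=1}^{EMA} EM_i}{10\,(EMA + C)} \;+\; EEJ\cdot\frac{\sum_{i=1}^{EEA} EE_i}{10\,(EEA + C)}$$

Each fraction is the average 1-10 rating scaled to lie between 0 and 1, damped toward zero when only a few people have been asked.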

There are mathematical ways we can give more points to conclusions that have more reasons to believe they have ethical methods and results

I propose the basic Ethics Method score outlined below:
 

However, once this is working, we could tweak it a bit. It may be slightly more complicated, but I believe it will give us better results.

Many beliefs have explicit actions. For instance, Barack Obama proposed that "we raise taxes for families who make more than $250,000 a year". This statement is a single action proposal. However, a single action can have many related ethical arguments.  For instance we can investigate the broader ethical question of a national income tax, or the ethics of a progressive national income tax, or the ethics of a specific national income tax that does not take into account cost of living, or family size.

Of course, if you familiarize yourself with my other equations, you will notice that I already count reasons to agree or disagree with each proposal. So for this item, we could just submit the argument about the ethics of a conclusion as a standard argument. However, I see extra value in tagging an argument as a specific ethical argument that relates to either the method or the result.

Because an argument about the ethics of any of these sub-arguments can also have arguments about its validity, we need to re-introduce the "linkage score", and use n to represent the number of steps the sub-argument is removed from the conclusion we are currently scoring:
 

  • n = number of "steps" the current argument is removed from the conclusion

We can use algebra to represent each term, and make it look a little more mathematical, with the below formula:
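The formula itself appears in the original as an image; reconstructed from the definitions that follow, it would look something like this (a sketch, assuming each listed reason counts as one point before the 1/n weighting):

$$CS_{EM} = \frac{\sum_{n}\sum_{i}\dfrac{AAEM_{n,i}}{n} \;-\; \sum_{n}\sum_{j}\dfrac{ADEM_{n,j}}{n}}{\sum_{n}\sum_{i} AAEM_{n,i} \;+\; \sum_{n}\sum_{j} ADEM_{n,j}}$$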
  • n: Number of "steps" the current argument is removed from the conclusion.
  • AAEM(n,i)/n: Arguments that Agree that a proposal has Ethical Methods. When n=1 we are looking at arguments that directly support the belief that a conclusion's methods are ethical. The 2nd subscript is "i". This is used to indicate that we total all the reasons to agree. So when n=1, we could have 5 "i's", indicating there are 5 reasons to agree. These would be labeled A(1,1), A(1,2), A(1,3), A(1,4), and A(1,5). The n on the bottom indicates that reasons to agree with reasons to agree only contribute ½ a point to the overall conclusion. Thus reasons to agree with reasons to agree with reasons to agree would only contribute 1/3 of a point, and so on.
  • ADEM(n,j)/n: Arguments that Disagree that a proposal has Ethical Methods. Ds are reasons to disagree, and work the same as As, but the number of reasons to disagree is subtracted from the conclusion score. Therefore, if you have more reasons to disagree, you will have a negative score. "j" is used just to indicate that each reason to disagree is independent of the reasons to agree.
  • The denominator is the total number of reasons to agree or disagree. This normalizes the equation, resulting in the conclusion score (CS) representing the net percentage of reasons that agree. The conclusion score will range between -100% and 100% (or -1 and +1).
  
Many beliefs have explicit actions. For instance, Barack Obama proposed that we raise taxes for families who make more than $250,000 a year. This proposal has unstated results. It may be somewhat complicated, because people may disagree about whether a proposal really requires particular actions.

There are ways to implement all of this in software, so that we give more points to conclusions that have a higher perceived ethicality of their methods and results, or more reasons to believe they are ethical.


Almost all of these equations can be implemented on computers. All we have to do is build a forum that collects the above data; then it is a simple matter of applying the above equations.
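As a sketch, here is how the survey-based Perceived Ethics Score defined above could be computed, using the reconstructed formula given earlier (the constant C, the EEJ weighting, and all names are assumptions):

```python
# Sketch of the survey-based Perceived Ethics Score (PES) described above.
# Assumes the reconstructed formula: EEJ weights the ends term against the
# means term, and the constant C damps small sample sizes.

def group_ethics_score(scores: list[int], c: float = 5.0) -> float:
    """Average of 1-10 ethics ratings, normalized to 0..1 and damped by C."""
    return sum(scores) / (10.0 * (len(scores) + c)) if scores else 0.0

def perceived_ethics_score(means_scores: list[int],
                           ends_scores: list[int],
                           eej: float) -> float:
    """eej is the fraction of people who think the ends justify the means."""
    means = group_ethics_score(means_scores)
    ends = group_ethics_score(ends_scores)
    return (1.0 - eej) * means + eej * ends

# Example: 4 raters scored the means, 3 scored the ends, and 40% of people
# think the ends justify the means.
print(round(perceived_ethics_score([8, 9, 7, 8], [6, 5, 7], 0.4), 3))  # ~0.303
```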


Friday, December 28, 2012

Example: The end does not justify the means.

Reasons to agree

  1. From a practical standpoint, if everyone thought the end justified the means, then the world would be a much worse place, because an extremist view of the ends justifying the means would allow you to kill those who disagreed with you. This would result in a lot of war, and murder. 
  2. A lot of people have justified their actions by saying that the end justifies the means.
  3. God will not require us to do evil, to defeat evil.
  4. People who say that the ends justify the means cause more problems trying to fix things than if they had just stayed out of it. It's better to live and let live.
  5. Just because an abortion may result in good things for the mother, and even society as a whole, doesn't mean that it is alright. You can't say that taking a life is ever justified. 


Reasons to disagree

  1. If killing is wrong, would you have killed Hitler, if you knew it would have saved millions of lives? The ends may justify the means, if in the long run it helps more people than it hurts. I would have killed Hitler.
  2. Sometimes the end justifies the means and sometimes it doesn't.
  3. The end does justify the means when the good guys are doing the justification.
  4. You can accomplish good by doing evil. 


We can use algebra to represent each term and make it more formally mathematical, with the below formula and an explanation of each term:


Ranking this conclusion by the ratio of reasons to agree vs. disagree (please add your reason to agree or disagree)


  • n: Number of "steps" the current argument is removed from the conclusion.
  • A(n,i)/n: When n=1 we are looking at arguments that are used directly to support or oppose a conclusion. The 2nd subscript is "i". This is used to indicate that we total all the reasons to agree. So when n=1, we could have 5 "i's", indicating there are 5 reasons to agree. These would be labeled A(1,1), A(1,2), A(1,3), A(1,4), and A(1,5). The n on the bottom indicates that reasons to agree with reasons to agree only contribute ½ a point to the overall conclusion. Thus reasons to agree with reasons to agree with reasons to agree would only contribute 1/3 of a point, and so on.
  • D(n,j)/n: Ds are reasons to disagree, and work the same as As, but the number of reasons to disagree is subtracted from the conclusion score. Therefore, if you have more reasons to disagree, you will have a negative score. "j" is used just to indicate that each reason is independent of the others.
  • The denominator is the total number of reasons to agree or disagree. This normalizes the equation, resulting in the conclusion score (CS) representing the net percentage of reasons that agree. The conclusion score will range between -100% and 100% (or -1 and +1).
  • L: Linkage Score. The above equation would work very well if people submitted arguments that they honestly felt supported or opposed conclusions. We could probably find informal ways of making this work, similar to how Wikipedia trusts people and has a team of editors to ensure quality. However, we could also introduce formal ways to discourage people from using bad logic. For instance, people could submit that "the grass is green" as a reason to support the conclusion that we should legalize drugs. The belief that the grass is green will have some good reasons to support it, and may have a high score. At first, to avoid this problem, I would just have editors remove bad-faith arguments. But a formalized process would be to have, for each argument, a linkage score between -1 and +1 that gets multiplied by the argument's score, representing the percentage of that argument's points that should be given to the conclusion. See LinkageScore for more.


Conclusion Score = [(5/1)×L - (4/1)×L] / (5+4) (because I don't have this working yet with linkage scores, let's assume L=1 for each argument) = (5 - 4)/9 ≈ 11% valid. This might not sound like much, but looking at the math you can see that values will range between -100% and +100%.
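A quick sketch of the same arithmetic in code, with the linkage score L fixed at 1 as in the example above:

```python
# Reproduces the worked example: 5 reasons to agree, 4 reasons to disagree,
# all at depth n = 1, with every linkage score L assumed to be 1.
agree, disagree, n, L = 5, 4, 1, 1.0
conclusion_score = ((agree / n) * L - (disagree / n) * L) / (agree + disagree)
print(f"{conclusion_score:.0%}")  # 11%
```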

Monday, December 24, 2012

Half the Facts You Know Are Probably Wrong

http://reason.com/archives/2012/12/24/half-the-facts-you-know-are-probably-wro

Dinosaurs were cold-blooded. Increased K-12 spending and lower pupil/teacher ratios boost public school student outcomes. Most of the DNA in the human genome is junk. Saccharin causes cancer and a high fiber diet prevents it. Stars cannot be bigger than 150 solar masses.

In the past half-century, all of the foregoing facts have turned out to be wrong. In the modern world facts change all of the time, according to Samuel Arbesman, author of the new book The Half-Life of Facts: Why Everything We Know Has an Expiration Date (Current). 

Fact-making is speeding up, writes Arbesman, a senior scholar at the Kauffman Foundation and an expert in scientometrics, the science of measuring and analyzing science. As facts are made and remade with increasing speed, Arbesman is worried that most of us don't keep up to date. That means we're basing decisions on facts dimly remembered from school and university classes—facts that often turn out to be wrong.

In 1947, the mathematician Derek J. de Solla Price was asked to store a complete set of The Philosophical Transactions of the Royal Society temporarily in his house. Price stacked them in chronological order by decade, and he noticed that the number of volumes doubled about every 15 years, i.e., scientific knowledge was apparently growing at an exponential rate. Thus the field of scientometrics was born.

Price started to analyze all sorts of other kinds of scientific data, and concluded in 1960 that scientific knowledge had been growing steadily at a rate of 4.7 percent annually for the last three centuries. In 1965, he exuberantly observed, "All crude measures, however arrived at, show to a first approximation that science increases exponentially, at a compound interest of about 7 percent per annum, thus doubling in size every 10–15 years, growing by a factor of 10 every half century, and by something like a factor of a million in the 300 years which separate us from the seventeenth-century invention of the scientific paper when the process began."

A 2010 study in the journal Scientometrics, looking at data between 1907 and 2007, concurred: The "overall growth rate for science still has been at least 4.7 percent per year."

Since knowledge is still growing at an impressively rapid pace, it should not be surprising that many facts people learned in school have been overturned and are now out of date. But at what rate do former facts disappear? Arbesman applies to the dissolution of facts the concept of half-life—the time required for half the atoms of a given amount of a radioactive substance to disintegrate. For example, the half-life of the radioactive isotope strontium-90 is just over 29 years. Applying the concept of half-life to facts, Arbesman cites research that looked into the decay in the truth of clinical knowledge about cirrhosis and hepatitis. "The half-life of truth was 45 years," he found.

In other words, half of what physicians thought they knew about liver diseases was wrong or obsolete 45 years later. Similarly, ordinary people's brains are cluttered with outdated lists of things, such as the 10 biggest cities in the United States.

Facts are being manufactured all of the time, and, as Arbesman shows, many of them turn out to be wrong. Checking each one is how the scientific process is supposed to work; experimental results need to be replicated by other researchers. So how many of the findings in 845,175 articles published in 2009 and recorded in PubMed, the free online medical database, were actually replicated? Not all that many. In 2011, a disquieting study in Nature reported that a team of researchers over 10 years was able to reproduce the results of only six out of 53 landmark papers in preclinical cancer research.

In 2005, the physician and statistician John Ioannidis published "Why Most Published Research Findings Are False" in the journal PLoS Medicine. Ioannidis cataloged the flaws of much biomedical research, pointing out that reported studies are less likely to be true when they are small, the postulated effect is likely to be weak, research designs and endpoints are flexible, financial and nonfinancial conflicts of interest are common, and competition in the field is fierce. Ioannidis concluded that "for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias." Still, knowledge marches on, spawning new facts and changing old ones.

Another reason that personal knowledge decays is that people cling to selected "facts" as a way to justify their beliefs about how the world works. Arbesman notes, "We persist in only adding facts to our personal store of knowledge that jibe with what we already know, rather than assimilate new facts irrespective of how they fit into our worldview." All too true; confirmation bias is everywhere. 

So is there anything we can do to keep up to date with the changing truth? Arbesman suggests that simply knowing that our factual knowledge bases have a half-life should keep us humble and ready to seek new information. Well, hope springs eternal. 

More daringly, Arbesman suggests, "Stop memorizing things and just give up. Our individual memories can be outsourced to the cloud." Through the Internet, we can "search for any fact we need any time." Really? The Web is great for finding an up-to-date list of the 10 biggest cities in the United States, but if the scientific literature is littered with wrong facts, then cyberspace is an enticing quagmire of falsehoods, propaganda, and just plain bunkum. There simply is no substitute for skepticism.

Toward the end of his book, Arbesman suggests that "exponential knowledge growth cannot continue forever." Among the reasons he gives for the slowdown is that current growth rates imply that everyone on the planet would one day be a scientist. The 2010 Scientometrics study also mused about the growth rate in the number of scientists and offered a conjecture "that the borderline between science and other endeavors in the modern, global society will become more and more blurred." Most may be scientists after all. Arbesman notes that "the number of neurons that can be recorded simultaneously has been growing exponentially, with a doubling time of about seven and a half years." This suggests that brain/computer linkages will one day be possible. 

I, for one, am looking forward to updating my factual knowledge daily through a direct telecommunications link from my brain to digitized contents of the Library of Congress.

Sunday, December 23, 2012

How do you define a good conclusion?

How do you define a good conclusion? It is simple: a good conclusion has lots of good arguments that support it, and not very many good arguments that oppose it. But how do you know if an argument is any good? Well of course the turtle stack goes all the way down: good arguments have lots of good reasons to agree with them, and not very many good reasons to disagree with them.

Tuesday, December 18, 2012

Administrators



Until we have algorithms that can automatically promote better arguments (by rewarding good behaviors, punishing bad behaviors, and removing spam and trolls), we may need administrators.

There are a number of ways of finding administrators. We could draw from the field of conflict resolution and dispute mediation. For instance, we could offer training and give tests on skills that have been proven to resolve conflicts; the field of conflict resolution already has standards of training for good mediators.

For specific arguments, we could give slightly more weight to opinions from "certifiable experts" in that field. For each person who asserts they are an expert, we could have an algorithm determine how many extra points to give their vote. I propose the following equation and list of definitions:



  • PRn: Number of professors who remember or recommend a student.
  • PAn: Number of professors who were asked to recommend a student. The database would have a form for sending a recommendation. It would have a list of known professors at a university that it would send the request to.
  • C: Constant. This is needed because if you ask 1 teacher and they recommend you, we still are not 100% sure that you went to the school or were a good student. The constant results in a situation where getting two of two recommendations is better than getting 1 of 1, even though they are both 100%.
  • VESn: Verifier's Expertise Score. A teacher's level of expertise would be obtained by a similar equation, with their peers being the verifiers for each area of study.
  • RSn: Recommender's score. This multiplier would allow teachers to weight their recommendation, perhaps on a scale of 0 to 1.
  • RSn with a bar over it: The average score given out by a given teacher.
  • SRn: Number of fellow students who remember or recommend a student.
  • SAn: Number of fellow students who were asked to recommend a student. Similar to above, the database would have a form for sending a recommendation.
  • sn: Score on a test designed to determine proficiency.
  • sn with a bar over it: The average test score.
  • GPA: Grade point average.
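The equation image from the original post is not reproduced here, so the sketch below is only one guess at how these inputs might combine into an expertise score; the weighting, the averaging of the four terms, and every name are assumptions:

```python
# Hypothetical sketch of an expertise score built from the inputs defined
# above. How the terms combine is an assumption; the original equation is
# not reproduced in this post.

def endorsement_ratio(received: float, asked: int, c: float = 5.0) -> float:
    """Recommendations received over recommendations requested, damped by C
    so that 2 of 2 beats 1 of 1 even though both are 100%."""
    return received / (asked + c) if asked else 0.0

def expertise_score(prof_recs: list[tuple[float, float, float]],  # (RS, RS average, VES)
                    profs_asked: int,
                    student_recs: int, students_asked: int,
                    test_score: float, test_avg: float,
                    gpa: float) -> float:
    # Each professor's recommendation is weighted by how generous that
    # professor usually is (RS / RS average) and by the professor's own
    # expertise score (VES).
    prof_points = sum(rs / rs_avg * ves for rs, rs_avg, ves in prof_recs)
    prof_term = endorsement_ratio(prof_points, profs_asked)
    student_term = endorsement_ratio(student_recs, students_asked)
    test_term = test_score / test_avg if test_avg else 0.0
    gpa_term = gpa / 4.0
    # Simple average of the four normalized terms (an assumed weighting).
    return (prof_term + student_term + test_term + gpa_term) / 4.0

# Example: two professors recommend (five were asked), 8 of 20 classmates
# recommend, test score 85 against a class average of 70, GPA 3.6.
print(round(expertise_score([(0.9, 0.7, 0.8), (1.0, 0.9, 0.6)],
                            5, 8, 20, 85, 70, 3.6), 3))
```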

Thursday, December 13, 2012

The main Algorithm

Abstract 

I propose that we build the SQL code that would facilitate an online forum. This forum would use a relational database to track reasons to agree and disagree with conclusions. It would also allow you to submit a belief as a reason to support another belief (see Figure 1 below): 


Figure 1: Arguments used to support other arguments

Arguments are currently made on websites, in books, and even in videos and songs. It would be powerful to outline all the arguments that agree or disagree with a conclusion and put them on the same page as seen below:



Figure 2: Arguments go from websites, books, songs, videos, into a relational database and are presented with their structure

Having the structure of how all these arguments are used to support each other could allow us to automatically strengthen or weaken a conclusion's score based on the scores of its assumptions.

The purpose of the Idea Stock Exchange is to find ways to give conclusions scores based on the quality and quantity of the reasons to agree or disagree with them, using an open-source SQL database.
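As a sketch, the core tables of such a database might look something like the following; the table and column names are illustrative assumptions, shown here with Python's built-in sqlite3 module:

```python
# Illustrative sketch of core tables for a reasons-to-agree/disagree forum.
# Table and column names are assumptions, not a finished design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE belief (
    belief_id INTEGER PRIMARY KEY,
    statement TEXT NOT NULL
);

-- A belief used as a reason to agree (+1) or disagree (-1) with another
-- belief. linkage_score records how well the reason supports the conclusion;
-- uniqueness discounts near-duplicate reasons.
CREATE TABLE argument (
    argument_id   INTEGER PRIMARY KEY,
    conclusion_id INTEGER NOT NULL REFERENCES belief(belief_id),
    reason_id     INTEGER NOT NULL REFERENCES belief(belief_id),
    direction     INTEGER NOT NULL CHECK (direction IN (-1, 1)),
    linkage_score REAL DEFAULT 1.0,
    uniqueness    REAL DEFAULT 1.0
);
""")
```

Because a reason is itself a row in the belief table, it can have its own reasons to agree and disagree, which is what lets scores propagate from assumptions to conclusions.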
Pros and Cons are a tried and true method to evaluate a conclusion

Many people, including Thomas Jefferson and Benjamin Franklin, advocated making a list of pros and cons to help them make decisions. The assumption is that the quantity and quality of the reasons to agree or disagree with a proposed conclusion has some bearing on the underlying strength of that conclusion. I wholeheartedly agree. 

No one has yet harnessed the power of Pros and Cons in the information age. We can.

However, now that we have the internet, we can crowd source the brainstorming of reasons to agree or disagree with a conclusion.

The only trick is how you evaluate the strength of each pro or con. Many people suggest putting the strongest pros or cons at the top of the list. Also, if we had enough time, we might make a separate list for each pro or con.

For instance, FDR had to decide if we should join WWII or not. One pro might be that the German leaders were bad. There were many reasons to support this belief, and this belief was used to support another belief.

Not very many people have enough time to do a pro or con list for each pro or con. But on the internet we keep making the same arguments over and over again. For thousands of years we have been repeating the same arguments that Aristotle and Homer have made. Most of our arguments have been made thousands or millions of times. However no one has ever taken the time to put them into a database, and outline how they relate to each other. We can change this.

I propose that we find algorithms that attempt to promote good conclusions and arguments. The simplest and best method of scoring conclusions is to count the number of reasons to agree and subtract the number of reasons to disagree. Because some arguments are better than other arguments, we should repeat this process for every argument until we reach verifiable data. The following equation represents this plan:

  • n = number of "steps" the current argument is removed from the conclusion



We can use algebra to represent each term, and make it look a little more mathematical, with the below formula:
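The formula appears in the original as an image; reconstructed from the definitions below, and from the worked example in the December 28 post, it would look something like this (each A(n,i) and D(n,j) counts one reason before the 1/n weighting):

$$CS = \frac{\sum_{n}\sum_{i}\dfrac{A_{n,i}}{n} \;-\; \sum_{n}\sum_{j}\dfrac{D_{n,j}}{n}}{\sum_{n}\sum_{i} A_{n,i} \;+\; \sum_{n}\sum_{j} D_{n,j}}$$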

  • n: Number of "steps" the current argument is removed from the conclusion.
  • A(n,i)/n: When n=1 we are looking at arguments that are used directly to support or oppose a conclusion. The 2nd subscript is "i". This is used to indicate that we total all the reasons to agree. So when n=1, we could have 5 "i's", indicating there are 5 reasons to agree. These would be labeled A(1,1), A(1,2), A(1,3), A(1,4), and A(1,5). The n on the bottom indicates that reasons to agree with reasons to agree only contribute ½ a point to the overall conclusion. Thus reasons to agree with reasons to agree with reasons to agree would only contribute 1/3 of a point, and so on. If we decide to make the bottom of the equation n x 2, then these would contribute 1/6 of a point. It is obvious that some of their score should contribute to the conclusion's score, because weakening an assumption should automatically weaken all the conclusions built on that assumption. We could continually adjust this weighting to give reasonable results, or each website could use its own secret sauce. 
  • D(n,j)/n: Ds are reasons to disagree, and work the same as As, but the number of reasons to disagree is subtracted from the conclusion score. Therefore, if you have more reasons to disagree, you will have a negative score. "j" is used just to indicate that each reason is independent of the others.
  • The denominator is the total number of reasons to agree or disagree. This normalizes the equation, resulting in the conclusion score (CS) representing the net percentage of reasons that agree. The conclusion score will range between -100% and 100% (or -1 and +1).
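A minimal sketch of this scoring in code, assuming each belief is stored as a tree of reasons that agree or disagree with it (the data structures, the names, and the sign-flipping for chains that pass through a reason to disagree are all assumptions):

```python
# Minimal sketch: each reason contributes 1/n points, where n is how many
# steps it sits from the conclusion, and the total is normalized by the raw
# count of reasons so the score stays between -1 and +1.
from dataclasses import dataclass, field

@dataclass
class Belief:
    statement: str
    reasons_to_agree: list["Belief"] = field(default_factory=list)
    reasons_to_disagree: list["Belief"] = field(default_factory=list)

def _walk(belief: "Belief", n: int, sign: int, totals: dict) -> None:
    for reason in belief.reasons_to_agree:
        totals["weighted"] += sign / n
        totals["count"] += 1
        _walk(reason, n + 1, sign, totals)
    for reason in belief.reasons_to_disagree:
        totals["weighted"] -= sign / n
        totals["count"] += 1
        _walk(reason, n + 1, -sign, totals)  # attacking a counterargument helps

def conclusion_score(conclusion: "Belief") -> float:
    totals = {"weighted": 0.0, "count": 0}
    _walk(conclusion, 1, 1, totals)
    return totals["weighted"] / totals["count"] if totals["count"] else 0.0

# Example: one direct reason to agree (which itself has one reason to agree)
# and one direct reason to disagree: (1 + 1/2 - 1) / 3 ≈ 0.17.
ww2 = Belief("It was good for the US to join WWII",
             reasons_to_agree=[Belief("Nazi leaders were doing bad things",
                                      reasons_to_agree=[Belief("Documented atrocities")])],
             reasons_to_disagree=[Belief("War causes immense suffering")])
print(round(conclusion_score(ww2), 2))  # 0.17
```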

The above equation would work very well, if people submitted arguments that they honestly felt supported or opposed conclusions. We could probably find informal ways of making this work, similar to how Wikipedia trusts people, and has a team of editors to ensure quality. However, we could also introduce formal ways to discourage people from using bad logic.

For instance, people could submit "the grass is green" as a reason to support the conclusion that we should legalize drugs. The belief that the grass is green will have some good reasons to support it, and may have a high score. At first, to avoid this problem, I would just have editors remove bad-faith arguments. But a formalized process would be to have, for each argument, a linkage score between -1 and +1 that gets multiplied by the argument's score, representing the percentage of that argument's points that should be given to the conclusion.

I believe the most elegant way to come up with a linkage score would be to just make a new argument, that "A supports B", with all the normal reasons to agree and disagree. However, I also propose using the percentage of up-votes compared to the percentage of down-votes, and the other good-idea-promoting algorithms below.

Also, without editors, you would run into the problem of duplication. If we had this at the time of the Gulf Wars, people could have been submitting the belief that Saddam Hussein was a bad person as a reason to support the belief that we should go to war. People would submit the belief that we don't go to war with everyone who is bad, as a way of weakening the linkage between this conclusion and argument. But someone might also submit the belief that he was "evil". How much is the word "evil" the same thing as "bad"? Is evil just a worse kind of bad? These questions could be quantified if, for each argument, we brainstormed a list of "other ways of saying the same thing". Of course we would use all of our algorithms to determine to what degree they are the same thing. If we determine that two items are 85% the same thing, then when both of them are used as reasons to support the same conclusion, together they would only count for 1.15x the points, not 2x.
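A small sketch of that discounting, assuming a pairwise similarity score between 0 and 1 is already available from the forum's own algorithms (how similarity is computed is left open here):

```python
# Sketch of discounting near-duplicate reasons: two reasons judged 85% "the
# same thing" should together count for 1.15x the points, not 2x.

def effective_weight(similarities_to_earlier: list[float]) -> float:
    """Weight of a new reason, given its similarity to reasons already counted."""
    return max(0.0, 1.0 - max(similarities_to_earlier, default=0.0))

# "Saddam Hussein was bad" is already counted; "Saddam Hussein was evil" is
# judged 85% the same, so it adds only 0.15 -> 1.15 in total.
print(1.0 + effective_weight([0.85]))  # 1.15
```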

Examples

We might be arguing the conclusion that "It was good for us to join WWII." Someone may submit the argument that "Nazis were doing bad things" as a reason to support the conclusion about entering the war. The belief that Nazis were doing bad things might already have a score. Let's suppose that this idea score has a high ranking of 99%. This might be awarded a linkage score of 90% (as a reason to support the conclusion that we should have gone to WWII). In this situation it would contribute 0.891 points (0.99 x 0.90) to the conclusion score for the belief that "It was good for us to join WWII". Someone else might submit the belief that "Nazis were committing wide-scale systematic genocide" as a reason to support the belief that "It was good for us to go to WWII". Because we don't go to war with every country that "does bad things", we would assume that this linkage score would be higher, perhaps 98%.

For example, the belief that Nazi Germany's leaders were evil is a belief with many arguments to support it. However, it can also be used as an argument to support other conclusions, such as the belief that it was good for us to join WWII.


Assumptions
  • Reason: A belief used to support another belief. (For example, the belief that Nazi Germany's leaders were evil is a belief with many arguments to support it. However, it can also be used as an argument to support other conclusions, such as the belief that it was good for us to join WWII.)
  • Good Belief: Good reasons to agree > good reasons to disagree
  • Bad Belief: Good reasons to agree < good reasons to disagree
  • Great Belief: Good reasons to agree >> good reasons to disagree
  • Terrible Belief: Good reasons to agree << good reasons to disagree


Wednesday, December 12, 2012

We should structure online debates so reasons to agree and disagree with a belief are on the same page

You can't really win an argument by ignoring your opponent, their arguments, and their data.

There are many things web designers can do to help people resolve their conflicts

Reasons to agree:
  1. It would help us move towards understanding if web forum designers rewarded those who can demonstrate that they understand those with whom they disagree. 
    1. There are many ways discussion forum designers can reward those who demonstrate that they understand those with whom they disagree.
      1. Web designers could test users' ability to properly identify similar concepts from multiple choice options.
        1. Perhaps people who have their comments evaluated could have special consideration in evaluating whether or not the person who disagreed got their statement right. 
      2. Maybe before you disagree with someone, you have to put into your own words exactly which part you disagree with. You could do this by highlighting or bolding the part that you disagree with. 
  2. Web designers would help online debate if they created web forums that allowed users to identify specifically which portions of text they agree and disagree with. 
    1. Not identifying exactly which portion you disagree with results in confusion.
    2. Psychologists could help out in this area. 

If we entered our beliefs and arguments into databases, there are many features of relational databases that could help us come to better conclusions


  1. If our beliefs and arguments were entered into a relational database, we could: 
    1. tag arguments as either a reason to agree or disagree with a particular belief. This would be beneficial because: 
      1. We could post the results so that reasons to agree or disagree with a conclusion would be on the same webpage.
      2. It would be beneficial to have all the reasons to agree and disagree with a belief on the same page.  
    2. assign scores to arguments
    3. assign scores to beliefs, based on the score of the arguments for and against the beliefs
    4. assign scores to beliefs, based on other beliefs that are used to support or oppose them. For instance, the belief that the middle class should get a tax break has many reasons to agree or disagree with it, and it can also be used as a reason to support or oppose other beliefs, like the belief that we should support politicians who agree or disagree with a middle class tax cut. 
    5. tag them with intelligent metadata, to allow computers to help organize the arguments for us. 

We need to back up our beliefs with clear logic and well-founded reasoning



Reasons or arguments people use to agree:
  1. Evidence-free metaphysical speculations or politicized wish-fulfillment fantasies will destroy us.
    1. We can't just adopt socialism because it makes us feel good, without first knowing that it will work, and that it won't put our good, freedom-loving nice guys at a disadvantage in competition with non-freedom-loving dictators. 
  2. Bertrand Russell was right when he said. "It is undesirable to believe a proposition when there is no ground whatsoever for supposing it is true."
  3. When you make an assumption you make an ass out of you and me. 
  4.  If we don't use good logic to make our arguments, we will come to bad decisions. 
  5. If we want to survive as a species, we need to make good decisions. 
  6. Our beliefs affect our happiness
    1. If you want to enjoy your life, you should spend your time on rewarding activities. 
  7. Our beliefs affect our actions.
  8. Our beliefs affect our personal success
    1. If we believe it is important not to be seen as a nerd, and we believe nerds are well educated, we will not want to be well educated. Yet your chances for success are improved with education.

Our conclusions and the reasons for coming to them are all tied together in complex nonlinear ways, similar to a relational database

  1. Our conclusions have many reasons to agree and disagree with them, and each of these beliefs has many reasons to agree and disagree with it. As these arguments branch out and multiply, it becomes too much for our brains to handle all at once.
  2. Assumptions are beliefs that are used to support other beliefs. If you change one assumption, it will change the strength of each conclusion that builds on that assumption. In a relational database you can record that 5 people live together; then, when you change one person's address, it can change all of their addresses. In a similar way, if we strengthen or weaken any assumption in a relational database, it will strengthen or weaken all of the conclusions that are based on that assumption (see the sketch below). Defining all these relationships is the only way we can ever make any progress at weighing all the data that we have.   
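A brief, self-contained sketch of that propagation; the numbers, the single linkage factor, and the example statements are illustrative assumptions:

```python
# Sketch: a conclusion computed from a shared assumption's score updates
# automatically when that assumption is strengthened or weakened.

assumption = {"agree": 6, "disagree": 1}   # e.g. "a middle class tax cut helps the economy"

def score(reasons: dict) -> float:
    total = reasons["agree"] + reasons["disagree"]
    return (reasons["agree"] - reasons["disagree"]) / total if total else 0.0

def conclusion_score_from(assumption: dict, linkage: float) -> float:
    """A conclusion leaning on the assumption inherits a share of its score."""
    return score(assumption) * linkage

print(round(conclusion_score_from(assumption, 0.9), 2))   # 0.64

assumption["disagree"] += 3   # new reasons to disagree weaken the assumption...
print(round(conclusion_score_from(assumption, 0.9), 2))   # ...and the conclusion: 0.18
```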

We should crowd source a database of things that people believe and arguments they use


  1. We need to back up our beliefs with good logic  Score: 9
  2. We can build a relational database that outlines our beliefs relatively cheaply 
  3. If we entered our beliefs and arguments into databases, there are many features of relational databases  that could help us come to better conclusions. 
  4. If we can sequence millions of lines of Human DNA, you would think that we could organize our thoughts and beliefs. 
  5. You need advanced scientific methods to sequence the human genome, but all you need is a database to outline the things people believe.
  6. If you use a relational database to associate arguments with the beliefs they support, you could design a scoring system that analyzes the validity of people's arguments, and then the cumulative validity of their beliefs.