When people encounter someone making a claim they strongly disagree with, they often use something like the following logic:
I believe “Y”, because I believe “X” and also “X implies (→) Y”. Since they believe “not Y”, then by the rules of implication—specifically, modus tollens—they must also believe “not X.”
Usually, Y is some kind of specific claim, like thinking a certain law is good or bad. X is usually closer to the moral side, and hence the argument usually justifies some incredulity and indignation. How could you not believe X? For example:
I believe “we should pass this law”, because I believe both “we should improve society” and also “if we want to improve society, we should pass this law”. Since they believe “we shouldn’t pass this law”, then they must believe “we shouldn’t improve society.”
There’s nothing wrong with this logic in isolation; the problems start when the other person, as well as not believing “Y”, doesn’t believe “X → Y” either.
In this case, there’s no reason at all they should have to believe X is false. They could believe X is true and Y is false with no problem; we want to improve society, but don’t think this law will do it. If this is the case, we have to dig deeper and explore whether it’s true that “X implies Y”. Otherwise, you end up just talking past each other based on completely different premises.
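To make this concrete, here is a minimal sketch in Python (the helper name `implies` is just illustrative) that enumerates every truth assignment compatible with the other person believing “not Y”:

```python
from itertools import product

def implies(x, y):
    """Material implication: "x implies y" is false only when x is true and y is false."""
    return (not x) or y

# Enumerate every truth assignment compatible with believing "not Y".
for x, y in product([True, False], repeat=2):
    if y:
        continue  # discard worlds where Y is true; the other person believes "not Y"
    print(f"X={x}  Y={y}  X->Y={implies(x, y)}")
```

Only two rows survive: one where X is false, and one where X is true but “X → Y” is false. So “not Y” alone forces nothing about X; “not X” follows only for someone who also accepts “X → Y”—modus tollens needs both premises.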
Diverse implications are what make claims controversial and divisive. If someone asks you if you believe Y, they are really asking whether you believe Y and all the surrounding claims they associate with it and use to justify it. If we want to properly rebut a claim, or have any kind of useful discussion over a disagreement, then these surrounding claims need to be discovered and made explicit. A rebuttal must proceed from a common base set of claims and definitions that all parties accept; and in discussion, it is necessary to spend time discovering what that base is.
Although it might seem obvious when pointed out, I suspect this kind of error occurs far more often than people think it does. It might explain the vast majority of, if not all, disagreements; even down to core beliefs like moral judgements, where others might assume we are fundamentally different. We justify our own choices by “X is good → we should do Y”, so when someone says we shouldn’t do it, the easiest conclusion is they are just an evil person.
Don’t take me to be saying that morality doesn’t exist. If anything, I’m saying the opposite—people probably disagree about morality far less than is perceived. They might disagree solely on empirical facts related to the situation, or on the relative weight they give to the various goods and bads at stake.
For instance, two people might disagree about whether it is permissible to boil lobsters alive just because they disagree about whether lobsters can feel pain. Since the basis of their moral disagreement is this disagreement about the relevant neurological fact, if they agreed on this non-moral fact, we could expect them to agree about the permissibility of boiling lobsters alive.
Furthermore, although people might disagree about the permissibility of boiling lobsters alive, we may assume that they agree that pain is a bad thing, and the infliction of undeserved pain is prima facie wrong. If this assumption is correct, then the disputants agree about the moral facts here. They disagree only about the empirical, non-moral facts.
(From “Intuitionism in Ethics,” Stanford Encyclopedia of Philosophy.)
Of course, we’re coming at this from a very logic-based angle. It’s possible that in discussion, we suspect the other person has no good rational reason not to believe the implication step “X → Y”. Therefore, maybe they do believe “X → Y” and hence “not X”, and so really are evil or malicious.
Not necessarily; they might simply have inconsistent beliefs. In my experience, people massively downplay this possibility. They are much more likely to think someone holds a nonsensical position to conceal an underlying, consistent belief than to think someone mistakenly believes the nonsensical position is true. Maybe this is them projecting their belief that they are perfectly unbiased onto others, or wishfully thinking that biases and irrationality are easy to overcome.
Our opponents are probably more similar to us than we think. This is no reason for alarm, or for throwing out any of our own beliefs. But it can help us figure out what people believe, and why, with greater detail.