MADISON, Wis. (AP) — Wisconsin Democratic Gov. Tony Evers on Tuesday vetoed a redistricting proposal that the Republican-controlled Legislature passed last week in a last-ditch effort to avert the …
I don’t particularly disagree with what you’re saying. My criticism is that building these elaborately complex algorithms to decide how districts get drawn amounts to having the political apparatus choose the voters instead of the voters choosing the political apparatus.
In simple terms: a data-science approach to redistricting is TOO powerful. It is too effective at getting to a result. It fundamentally undermines the democratic process.
In a well-functioning democracy – which I am not going to pretend the US is – elections need to be competitive. They need to allow for surprises, for political arbitrage. Some amount of unfairness is the price we pay for that. And a theoretical algorithm that takes into account EVERYTHING needed to ensure flawlessly “fair” elections, where outcomes perfectly mirror voter opinions and identities (and how would we even know they truly do in the first place?), may as well be trusted to pick the winners while it’s at it. We’re just in Foundation at that point.
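To make the “too effective at getting to a result” point concrete, here is a toy sketch (not any real districting method, and the precinct numbers are invented): the same 900 voters, grouped two different ways into three equal districts, produce opposite seat outcomes. Whoever controls the grouping controls the result.

```python
# Nine precincts of 100 voters each; a_votes[i] is party A's vote count
# in precinct i. Party A holds 40% of the overall vote either way.
a_votes = [55, 55, 55, 55, 55, 55, 10, 10, 10]

def seats_for_a(grouping):
    """Count districts (of 300 voters) where A wins a majority."""
    return sum(1 for district in grouping
               if sum(a_votes[i] for i in district) > 150)

# Two ways of grouping the same precincts into three districts:
map_rows = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]   # A wins 2 of 3 seats
map_cols = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]   # A wins 0 of 3 seats

print(sum(a_votes) / 900)      # 0.4 — A's vote share is fixed
print(seats_for_a(map_rows))   # 2
print(seats_for_a(map_cols))   # 0
```

A real optimizer searching over millions of legal maps is just this, scaled up: the voters don’t change, only the grouping does.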
The more we discuss it, the more one conclusion emerges: any act of subdividing representation (whether by geographic boundaries, demographics, or probably any criterion at all) makes that representation less democratic under scrutiny. Any simplified, heuristic criterion will almost never match the specific kind of representation a given individual wants to prioritize, so there will always be a mismatch between individuals and the defined criteria for “proportionate” representation.
So maybe the answer is broader elections with the voters themselves self-identifying the representation they want, and matching that to proportionally apportioned representatives? But I think we’ve just described political parties, so…I probably am not helping.