Notes & Miscellany

From Yuval Noah Harari's *21 Lessons for the 21st Century*:

Well, maybe Tesla will just leave it to the market. Tesla will produce two models of the self-driving car: the Tesla Altruist and the Tesla Egoist. In an emergency, the Altruist sacrifices its owner for the greater good, whereas the Egoist does everything in its power to save its owner, even if it means killing the two kids. Customers will then be able to buy the car that best fits their favourite philosophical view. If more people buy the Tesla Egoist, you won't be able to blame Tesla for that. After all, the customer is always right.

An alternative way to think about the Fake News story the liberal world is telling itself, as per Harper's Magazine:

One reason to grant Silicon Valley’s assumptions about our mechanistic persuadability is that it prevents us from thinking too hard about the role we play in taking up and believing the things we want to believe. It turns a huge question about the nature of democracy in the digital age—what if the people believe crazy things, and now everyone knows it?—into a technocratic negotiation between tech companies, media companies, think tanks, and universities. But there is a deeper and related reason many critics of Big Tech are so quick to accept the technologist’s story about human persuadability. As the political scientist Yaron Ezrahi has noted, the public relies on scientific and technological demonstrations of political cause and effect because they sustain our belief in the rationality of democratic government. Indeed, it’s possible that the Establishment needs the theater of social-media persuasion to build a political world that still makes sense, to explain Brexit and Trump and the loss of faith in the decaying institutions of the West. The ruptures that emerged across much of the democratic world five years ago called into question the basic assumptions of so many of the participants in this debate—the social-media executives, the scholars, the journalists, the think tankers, the pollsters. A common account of social media’s persuasive effects provides a convenient explanation for how so many people thought so wrongly at more or less the same time.

Emotional algorithms

Eliezer Yudkowsky has long argued that the picture of rationality as opposed to emotion is a mistaken one; once we have determined the most rational course of action, we should feel strongly about it. I'd argue for a new classification: emotions are just rationality 1.0, biochemical algorithms honed to make the most rational decision given the African savannah of roughly a million years ago. This logic extends to theories of consciousness. The ability to feel is just a rudimentary consequence of evolutionary rationality. To expect artificial intelligences, shaped not by natural selection but by intelligent human design, to develop consciousness in the human sense is to expect them to regress to a more primitive decision-making algorithm.

The problem, of course, with seeing emotions as outdated logic, something easily manipulated biochemically, is that most of our secular moral calculus is based on the principles of pleasure maximization and pain minimization. Imagine a cyborg human from 2084. This human wears a brain-computer interface, which connects over Bluetooth to their phone. Just like the humans in *Do Androids Dream of Electric Sheep?*, they can choose to 'dial up' any emotion they'd like. Punch the code '1654' and you get mild discomfort; punch '5963' and you get extreme nerves; punch '4523' and you get euphoria. Now imagine that this human, in this transhumanist future, can control these emotions directly through the brain interface as well.

So this human finds themself at a trolley-track junction, standing next to a big lever. A trolley is hurtling down the tracks, set to run over five people tied to the rails. They can pull the lever, switching the trolley onto another track, to which only one person is tied. What do they do? The consequentialist utilitarian calculus says: minimize pain and maximize pleasure. So you pull the lever. But suppose you can instead gesture to either the five people or the one person and say: "dial a 5633: total loss of human emotion for five minutes." Now you can run over either the five or the one, and no one will feel a thing. Perhaps afterwards you 3D print them all new bodies and upload Thursday's iCloud memory backup.

Another problem: how do we justify any kind of human specialness when intelligence, supposedly our greatest quality, is better expressed in artificial substrates? We could say that what makes humans special is not intelligence but our emotional capacity. And yet by that logic humans are no different from any of the other reasonably sentient animals on this earth.

These problems are part of the reason I think society should recommend philosophy more vigorously as a subject of study and a future career. "What are you going to do with that liberal arts degree, eh?" used to be a taunt. Now you can say: "Well, I'll just earn $200,000 a year working at OpenAI."

Paying people to vote for good policies.

*Confidence level/s: very speculative, low*

It seems that one of the key optimization problems in democratic decision-making is that we often have little incentive to vote for policies that we truly think will increase the public utility.

As per Jess Whittlestone at Vox:

While we have strong social incentives to defend our groups' political beliefs, we have very little incentive to form _accurate_ political beliefs. Most people acknowledge that political issues are incredibly important for society as a whole, but as individuals we’re not necessarily rewarded for how rational and truth-seeking we are about politics.
Consider the contrast between our beliefs about politics and our beliefs about the physical environment immediately around us. If I believe that the pavement ahead of me is clear and I’m wrong — there’s actually a lamppost in my path, say — I’m going to quickly suffer as a result of my false belief. This gives me a clear incentive to form accurate beliefs about the physical environment immediately around me.
The same logic doesn’t apply in politics. If I believe that immigration to my country from other countries is harmful, that false belief doesn’t harm me directly. Even if I vote in an election based on that belief, the chances of my belief actually affecting the outcome of the election are so slim that a poorly chosen vote is unlikely to hurt me personally.

A possibly workable idea for improving the utility-maximizing effectiveness of our politics rests on the premise that we might be better at deciding as a *demos* whether a policy was good in retrospect than we are at predicting whether a policy will be good in the future.

Thus, imagine a political economy in which, five to ten years after a government institutes a policy or set of policies, a decision oracle (everyone in the country) votes on whether they were good or not. If the policies are found "good" by a high majority, say 70%, everyone who originally voted for them gets paid; if the verdict is indecisive (i.e. near 50-50), no one gets paid; if they are found "bad", everyone who voted *against* them gets paid. We could fund this with a zero-sum prediction market, in which citizens could optionally bet on whether the policy would eventually come to be considered successful. Or we could set up the system as a way to distribute the "social dividends"/UBI that everyone seems to agree are soon coming. Either way, this should have the intended effect of encouraging everyone to do a bit more research into the policies they vote for; voting for the "right" policy becomes a money-making opportunity. Likewise, a government has an additional incentive to make its policies decisively work: once a policy has been voted in, if its benefit is unclear in retrospect (that 50-50 situation), no one gets to make any money. How do we manage this without giving up voter anonymity? Probably cryptographic digital voting, something along the lines of what Benaloh proposed in 1987. Five years after the fact, you cryptographically sign a message confirming you own the key that, according to the public anonymous voting ledger, originally authorized the favorable policy.
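The payout rule described above could be sketched as follows. The symmetric 30% threshold for a "bad" verdict and the flat payout amount are my own illustrative assumptions; the text only fixes the 70% "good" threshold:

```python
def retrospective_payout(approval: float, voted_for: bool, payout: float = 100.0) -> float:
    """Reward voters whose original vote matches the retrospective verdict.

    approval:  fraction of the retrospective decision oracle that now judges
               the policy to have been good (0.0 - 1.0).
    voted_for: whether this citizen originally voted for the policy.
    payout:    illustrative reward amount (a hypothetical parameter).
    """
    if approval >= 0.70:      # decisively "good": pay the yes-voters
        return payout if voted_for else 0.0
    if approval <= 0.30:      # decisively "bad" (assumed symmetric threshold):
        return payout if not voted_for else 0.0   # pay the no-voters
    return 0.0                # indecisive (near 50-50): nobody is paid

retrospective_payout(0.80, voted_for=True)    # yes-voter rewarded: 100.0
retrospective_payout(0.52, voted_for=True)    # indecisive verdict: 0.0
```

Note how the indecisive band gives governments the incentive mentioned above: a policy whose benefit is ambiguous pays nobody.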

But perhaps there is a better way, incorporating my feedback system into Eric Posner and E. Glen Weyl's quadratic voting system, in which citizens are distributed a budget of "voting credits" (yearly, or per referendum) that they can spend or save. If a citizen doesn't feel strongly about a given vote (considering only its effect on them personally), they can save their credits for a future election in which they do feel strongly; the power of their voting credits decreases with a quadratic penalty, however. That means 1 vote credit will buy you 1 vote, 2 vote credits 1.4142… votes, 100 vote credits 10 votes, 150 vote credits 12.247 votes, and so on.
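The quadratic arithmetic above can be checked with a minimal sketch (the function name is mine, introduced for illustration): casting v votes costs v² credits, so c credits buy √c votes.

```python
import math

def votes_from_credits(credits: float) -> float:
    """Under quadratic voting, casting v votes costs v**2 credits,
    so c credits buy sqrt(c) votes."""
    return math.sqrt(credits)

votes_from_credits(1)     # 1.0 vote
votes_from_credits(2)     # ~1.4142 votes
votes_from_credits(100)   # 10.0 votes
votes_from_credits(150)   # ~12.247 votes
```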

What if we ran the same feedback system as above, with citizens retrospectively judging the success of policies, except this time citizens who voted wisely are rewarded with more voting credits, and vice versa? With this system, we could also better apply the "negative feedback" aspect of the process, penalizing citizens for voting unwisely. How do we do this? Well, we could alter the exponent of the quadratic penalty applied when someone commits additional vote credits to a particular referendum. For example, if you voted in favor of "good" policies more often than "bad" ones over a period of ten years, the penalty on your votes per policy decreases:


While if the opposite happens, your votes over the next ten years carry a greater penalty:


Thus, we may have a sort-of solution to John Stuart Mill's quandary over how to assign more voting power to wiser voters, an idea he ultimately rejected because there is hardly an objective metric by which to sum up "wisdom"; the system would become aristocratic. This way, however, a democracy retrospectively decides its most prudent voters in a decentralised and anonymous fashion.
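One way the adjustable penalty might be implemented, as a sketch: let a citizen's track record shift the exponent q in the cost rule, so that casting v votes costs v^q credits and c credits buy c^(1/q) votes. The exponent values 1.8 and 2.2 below are hypothetical; the notes leave the exact adjustment unspecified.

```python
def adjusted_votes(credits: float, exponent: float = 2.0) -> float:
    """Votes bought by `credits` when casting v votes costs v**exponent.

    exponent == 2.0 is the plain quadratic rule. A good retrospective track
    record could lower the exponent (cheaper marginal votes); a bad one could
    raise it. The adjusted values used below are purely illustrative.
    """
    return credits ** (1.0 / exponent)

adjusted_votes(100)        # baseline quadratic rule: 10.0 votes
adjusted_votes(100, 1.8)   # rewarded voter: ~12.9 votes
adjusted_votes(100, 2.2)   # penalized voter: ~8.1 votes
```

The appeal of tuning the exponent rather than handing out raw credits is that it scales: the more credits a penalized voter commits to one referendum, the faster their marginal votes shrink.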


Its liquor is like the sweetest dew of Heaven
- Lu Yu.

Tea once made a brief and pretty bizarre appearance in my long list of childhood fads--encompassing everything from Lord of the Rings to Claymation--sometime around the ages of 8-12. I was fascinated by the process--I remember watching YouTube videos of tea preparation with great focus. Ours being an English household, however, Earl Grey was the only drinking option available. Not knowing that all varieties of tea are made from the same bush--Camellia sinensis--young me set about picking, rolling, and pan-firing some bay leaves from the garden. The result, predictably, was some very-much-not-green water with bits of charred bay leaves floating in it.

Now a semi-seasoned green and chai tea drinker, I returned to my old obsession this August after reading Mary Lou Heiss' *The Story of Tea*. The book is pretty all-encompassing; it ranges from a poetic history of the Tea-Horse trade route between the states of the Eurasian steppe, to a detailed discussion of the scientific manufacture processes of each tea style, to a breakdown of the key Organic regulatory certifications.

I was most interested in the history of the beverage. Tea was first consumed in the "vast nexus" of Assam in northeastern India and Yunnan in southwestern China. China, however, is tea's verifiable birthplace. Interestingly, some anthropologists speculate that the first humans to venture out of Africa--*Homo erectus*--may have discovered the indigenous tea trees of Yunnan, out of curiosity or by mimicking the animals of the area, ultimately discovering the caffeinating properties of the plant. At least by the Shang dynasty (1766-1050 BCE), tea was being consumed for its medicinal properties, and by the Zhou dynasty (1122-256 BCE), tea leaves were being boiled in nothing but hot water. At this point in history, tea became not just an ingredient in early herbal concoctions, but an invigorating yet bitter drink in its own right.

Tea began its journey as an export in 641 CE, when the Tang princess Wen Chong married the Tibetan king Songsten Gambo and brought tea with her from Sichuan to Tibet. Thus began a trade exchange between the Tibetans and the Tang court that would last until the 1260s: the Tang wanted strong horses for their militaries, while the Tibetans wanted tea as a break from their monotonous diet. These exchanges were known as the 'Tea-horse' routes, which stretched from Sichuan and Yunnan over the Himalayas to Tibet.

Tea entered Japan in 815 through the returning priest Saichō, who supposedly served it to the Emperor. By the sixteenth century, Japanese tea culture had become a thing of its own with the establishment of Chanoyu, the way of tea. During the Manchu period, the Dutch established a trading base at modern Jakarta on the island of Java, from where they purchased tea from China. The lengthy sea voyages from east to west instigated the West's infatuation with black, or oxidized, tea: to survive the journey, the Chinese realized, the tea leaf had to be allowed to darken before being bake-fired. In 1610 the first tea shipment reached The Hague, and from there tea took off amongst the European elite. By 1658 tea was being sold in London, and by 1669 England had granted the English East India Company a monopoly on trade in the East. The growing English addiction to tea could not be met by indirect purchases from the Dutch, and it wasn't long before the English had gone to war over tea (the Opium Wars) and established plantations in India, now a tea powerhouse.

Interesting tea facts

  • The Chinese have two words for *Camellia sinensis.* Speakers in the eastern port of Xiamen pronounced the character for tea *te*, while speakers in the southern ports of Canton (now Guangzhou) pronounced it as *cha* or *ch'a*; *cha* is now the dominant Cantonese and Mandarin pronunciation. The legacy of these two ports is in the words for tea in the languages of all those countries who bought from China. Thus:
    - *Te*: Catalan, Danish, Hebrew, Italian, Latvian, Malay, Norwegian, Spanish, Swedish
    - *Tea*: English, Hungarian
    - *Tee*: Afrikaans, Finnish, German, Korean
    - *The*: French, Icelandic, Indonesian, Tamil
    - *Thee*: Dutch
    - *Cha*: Greek, Hindi, Japanese, Persian, Portuguese
    - *Chai*: Russian
    - *Chay*: Albanian, Arabic, Bulgarian, Croatian, Czech, Serbian, Turkish

In 1848, ..., the English hired the Scottish botanist Robert Fortune to dress as a Chinese businessman and go undercover in Fujian Province, with the intent of collecting tea plants and learning the Chinese processes for manufacturing both green and black tea. With the help of Chinese accomplices, Fortune's subterfuge was successful. He returned triumphant, with smuggled tea cuttings, technical information, and more than eighty Chinese tea specialists ready to put their knowledge to work in India. The subsequent bushes propagated from the cuttings and seeds he smuggled out of China numbered more than twenty thousand plants.

Is moral philosophy masquerading as religion in its claim to have universal laws of morality?

From Aeon:

These leading ideas – of rational action, of the value of happiness, and of achieving the best that our nature affords – are grand ideas. In their grandeur, they can once again remind us of some of religion’s grand ideas. For example: that the evil of the world is explained by the possibility of redeeming it by the sacrifice of an innocent God. Or that we are absolutely predestined to hell or to heaven, yet must strive to act as if what we do could change that. And very much like the debates over those theological topics, the debates among the foundations of morality are irredeemably insoluble.

Of course, utilitarianism has its flaws--how do we weigh pain directly against pleasure, and how do we quantify these occurrences in a cost/benefit analysis?

The categorical imperative goes a long way to curtailing the evils of Moloch...

Likewise, all three secular moral lenses ignore the fact that pain and pleasure are not homogeneous--one man's pain is indeed another man's pleasure.

Hitchens said we all have a universal morality baked into the human genome--egalitarianism seems to be a survival advantage, at least of a selfish, human-only kind.

Where do we look? Not to intuition.

When feeling bound by a moral rule in that special way, the rule’s transgression, by oneself or others, is liable to trigger ‘moral’ emotions such as guilt or indignation. A Nazi might feel indignant at his colleague’s lack of zeal in persecuting Jews. A fundamentalist jihadist might feel guilty for secretly teaching his daughter to read. Deciding between good and bad moralities will once again lead to a wild-goose chase after foundations.

I mean, we care about morality in a secular way because... why exactly? It's *useful*? Useful for [insert maximizing pleasure and minimizing pain]?

Where does guilt come from? It can hardly be universal in application.

Should moral decisions lose their special value? Maybe we can just have *reasons*, not moral reasons...

From Reddit, I don’t remember where:

The way echo chambers work seems to be popularly mis-explained. How it’s explained: everyone you encounter agrees with you. How it actually works: everyone you encounter who you disagree with appears to be insane or evil. Next time you encounter someone who disagrees with you, you expect them to be insane or evil, causing you to act in a way that seems to them to be insane or evil.

A wonderfully absurd quote, from the generally wonderfully absurd *Le fabuleux destin d'Amélie Poulain*:

[Amélie hands a beggar some money] Beggar: Sorry, madam, I don't work on Sundays.

From *Eats, Shoots & Leaves*:

[The old New Yorker writer] Thurber was once asked by a correspondent: "Why did you have a comma in the sentence, 'After dinner, the men went into the living room'?" And his answer was probably one of the loveliest things ever said about punctuation. "This particular comma," Thurber explained, "was Ross's way of giving the men time to push back their chairs and stand up."

It is ironic that we find it so hard to love or empathize with artificial, made things, in spite of our assertion that a human coming about without God in the equation is equivalent to teaching kids they are "mindless animals." Surely made things (like us, supposedly) are truly important, while anything that came about through "natural" processes is cold and cruel and meaningless... Yet that is not what we think when we say that the robot dog is "unnatural", ew…

From the New Scientist:

An array of rat brain cells has successfully flown a virtual F-22 fighter jet. The cells could one day become a more sophisticated replacement for the computers that control uncrewed aerial vehicles or, in the nearer future, form a test-bed for drugs against brain diseases such as epilepsy.

Be careful when choosing to adopt new technologies:

Socrates famously warned against writing because it would “create forgetfulness in the learners’ souls, because they will not use their memories.”

Ordinary common things that are pointless

  • Garden lawns. A total waste of water and space for purely aesthetic purposes; people don't realize they are living out the capricious tastes of 18th-century French aristocrats. Possible alternatives:
    - Deserts
    - Moats
    - Concrete
  • Windows. A waste of valuable wall space for the small fraction of time you actually want to see what is outdoors. How about massive TVs which can project what is outdoors, but spend the rest of their time being art or computer screens or windows to life in other places?
  • Highways. Flying drones like those tested in China, or those made by Urban Aeronautics, could eliminate highways and the traffic responsible for an extraordinarily large amount of wasted time. As per:
    The United States Census Bureau reports that the average American spends 26 minutes getting to work, a figure that's increased by nearly five minutes since the early '80s. For those working 50 weeks a year, that means nine whole days will be spent commuting.

    And for city commuters:

    According to the Texas A&M Transportation Institute's Urban Mobility Scorecard, the average American commuting to and from an urban center will spend 42 hours sitting in traffic every year. (If you worked for 35 years and this remained constant, you'd be spending more than 61 days stuck behind the wheel. Oy.)