Week 3. Counting your notes before they hatch

“I can keep perfect time. Some call me the Human Metronome. You notice how I’m always on time? I’m never late for things.” — George Michael Bluth

Time signatures are something that I know how to use without really knowing how I’m using them, why I’m using them, or what they really mean. As I’m learning, their purpose is to indicate to a musician how to count each beat so that the music is played as it was written to be played and so that when multiple musicians play together they are on the same page, so to speak.

Time signatures are written like a fraction. The top number tells the musician how many beats to count in a measure. This number is usually between 2 and 12, but could be almost anything. The bottom number corresponds to what kind of note gets counted: a 2 in this spot indicates half notes, a 4 indicates quarter notes, an 8 indicates eighth notes, and so on. In standard notation the denominator is a power of two (2, 4, 8 or 16), with 4 and 8 being by far the most common.

To illustrate: In a 4/4 time signature all notes and rests must equal four quarter notes in each measure. The player knows there are four because the top number is four, and knows that they are quarter notes because the bottom number is four. This does not mean that each bar will contain solely four quarter notes (if that were the case music would be much easier to play and much less exciting to listen to). It means that any eighth notes or sixteenth notes or rests or half notes must all combine to equal four quarter notes in each measure, and that each beat is a quarter note in length.
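One way to see that arithmetic at work is to treat each note value as a fraction of a whole note and check that a measure adds up. Here is a minimal Python sketch of the idea; the function name and the example measures are my own, not anything from standard notation software.

```python
# A toy check of the rule above: in a given time signature, the note values
# in a measure must add up to exactly what the top number promises.
# Durations are written as fractions of a whole note (a quarter note is 1/4,
# an eighth note is 1/8, and so on).
from fractions import Fraction

def measure_is_full(note_values, beats_per_measure, beat_unit):
    """True if the notes (and rests) exactly fill one measure."""
    measure_length = Fraction(beats_per_measure, beat_unit)   # e.g. 4/4 or 3/4 of a whole note
    return sum(Fraction(1, v) for v in note_values) == measure_length

# One measure of 4/4: two eighth notes, a quarter note, and a half note.
print(measure_is_full([8, 8, 4, 2], beats_per_measure=4, beat_unit=4))   # True
# One measure of 3/4: a half note and a quarter note.
print(measure_is_full([2, 4], beats_per_measure=3, beat_unit=4))         # True
# Too much for 3/4: a half note and two quarter notes.
print(measure_is_full([2, 4, 4], beats_per_measure=3, beat_unit=4))      # False
```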

Below is a short example of a 3/4 time signature. Each beat in the song is the length of a quarter note and there are three beats in a measure. It is from Franz Lehár’s “The Merry Widow” (or, as they say in German, “Die lustige Witwe”).

[Sheet music: the opening measures of the waltz from “The Merry Widow,” in 3/4 time]

Time signatures

Common music time signatures

Music theory for young students 

Week 3. Long distances, subjective ingredients and a brief history of time

Tennessee Williams once wrote that “Time is the longest distance between two places,” meaning, I think, that time — because it is finite — is the only real measurement that matters between two points. Humans have yet to find a distance or a depth that is unreachable — at least in the course of a life (which might be thought of as the time allotted us).

But what, exactly, is time? That Facebook just invented a new unit of time (the flick) should say all that needs to be said about time being a human construct. Because for all its astronomical precision, time is still just an imprecise calculation designed to position us at a point in the universe, in history, and in our own existence.

Time, as we understand it, developed over several thousand years, starting around 3,000 BCE, when evidence first appears that the Chinese had measured a 366-day year based on the movements of the sun. A thousand years later they had a 12-month calendar — which included an occasional 13th month. Two thousand years after that, they had recognized what we know today as precession and came to understand that every 300 years the 12-month calendar would drift out of step with the seasons.

In the west, precession was recognized and accounted for in the name of religion. After Constantine tied the date of Easter to the spring equinox, the date of the holiday kept drifting (the equinox is, after all, an astronomical event), so in 1582 Pope Gregory XIII implemented the Gregorian calendar. That calendar, the one most of us use today, keeps the spring equinox around March 20-21 through the use of leap years.
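The leap-year correction that does this work is a small piece of arithmetic: every year divisible by 4 gets an extra day, except century years, which only count when divisible by 400. A quick sketch of the rule (the function name is just illustrative):

```python
def is_gregorian_leap_year(year):
    """Leap years under the Gregorian reform of 1582."""
    if year % 400 == 0:      # 2000 was a leap year
        return True
    if year % 100 == 0:      # 1900 was not
        return False
    return year % 4 == 0     # the ordinary rule: every fourth year

print([y for y in (1900, 1996, 2000, 2023, 2024) if is_gregorian_leap_year(y)])
# [1996, 2000, 2024]
```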

As for the more precise measures of time, those which can’t simply be observed by us layfolk through the rising and setting of the sun, their origin is not entirely known. Theories posit that the 24-hour day was devised to match the 12-constellation zodiac cycle. Others speculate that 12 fit nicely into 60 — which fit nicely into the 360-day year.

For being the thing upon which our lives mostly depend — from how long we work each week to how long we sleep each night to how long we live — time is awfully imprecise. Of course, we’ve modified our calendars and walk around with cellular-network-enabled watches in our pockets. But we still live and die by a measure based on our position in the universe, by our relation to the sun and the stars. When we are is based on where we are — though in our daily lives that seems to have mostly reversed.

Huw Price, a Cambridge University professor of philosophy, says that the absolute direction of time — the sense that we are constantly hurtling toward the future — exists only in our minds, the result of a “subjective ingredient,” a “temporal perspective” that we project onto the environment around us. Similarly, the passage of time and the existence of moments, he says, are mental constructs. Instead, drawing on Einstein’s theory of relativity, he argues that the block universe we live in is tenseless: the past, present and future are all equally real.

Massachusetts Institute of Technology physicist Max Tegmark describes this block universe theory of time with a DVD analogy: the movie on the DVD plays, there is drama, things happen and change, but nothing about the DVD itself changes. Thought of through this lens, the only measure of change — as a product of the passage of time — is memory. (Memory and its relationship with time might be worthy of its own Breaking Eggs entry.)

On the other hand, theoretical cosmologist Andreas Albrecht makes the argument that time exists in relativity. “When you try to discuss time in the context of the universe, you need the simple idea that you isolate part of the universe and call it your clock, and time evolution is only about the relationship between some parts of the universe and that thing you called your clock,” he says.

All of which leads me to believe that change may be the best way to consider time. If time itself is merely a construct, it is at least the reflection of change, whether it’s the change in sunlight as the day progresses, or the flow of traffic, or the accumulation of wrinkles and the loss of hair — time is merely the brain’s way of making sense of change, giving order to the chaos through some linear construct.

Some, of course, have continued to argue for the existence of time as an instrumental piece of our universe. But it seems far-fetched. Still, as I sit here mulling it over, it’s hard to imagine a world without time, or a facet of existence that is not built upon it.

To bring back Tennessee Williams, the full passage is, “I didn’t go to the moon, I went much further — for time is the longest distance between two places.” Time, real or not, is imperative to our personhood, to our development and growth, to our progress and our stability — time, and our memory of change, are how we measure our maturity and, ultimately, at any given moment, how we know who we are.

To get a much better and clearer picture of time and the complex arguments behind its existence, read Robert Lawrence Kuhn’s great “The illusion of time: What’s real?”, from which I gathered a great deal of the material for this piece.

Sources:

Week 2. The Root of the Matter (or, The Tooth about Root Canals)

Sometimes it feels like there is a tiny timpani playing inside your molar. Its constant beat is almost calming and it radiates a sort of warmth. The dental assistant says Oops, we have to do one more X-ray, open wide, and the dentist says, Looks like we’ll have to do a root canal, which isn’t a very big deal, don’t worry. Sorry, denticle drummer, we’re going to have to let you go.

A simple anatomy of a tooth

Commonly called a root canal, endodontic therapy is a type of treatment for the damaged insides of a tooth. Its name comes from the Greek, with the roots endo meaning “inside” and odont meaning “tooth.” The expedition inside the tooth is to remove infected pulp, which is in an area called the pulp chamber. Tooth pulp is vivacious. It is composed of living tissue, blood vessels, and cells with the name of a superhero: odontoblasts. The pulp’s primary function is to form dentin, which is the layer above the pulp chamber and helps protect the tooth. It is also nutritive (it keeps the surrounding mineralized tissue happy with nutrients and moisture) as well as sensory.

When the pulp becomes inflamed or infected it becomes sensitive (very—your tongue trains the coffee away from the tooth and you throw the rest of the Junior Mints in the trash) and it must be removed from its chamber. To do this, the well-paid endodontist creates an opening called an access cavity in the tooth’s crown and uses a root canal file to clean out nerve tissue, bacteria, toxins, and other debris. After the putrid pulp is removed from the chamber and root canals, a rubber compound called gutta-percha is inserted to seal the tooth.

Gutta-percha comes from a tree of the same name (Latin: Palaquium gutta). The natural latex produced from the sap of these trees has been used for sundry industrial and domestic purposes, most notably as insulation for underwater telegraph cables. It is a very flexible material, happy under the ocean or inside a tooth’s pulp chamber. Once the gutta-percha is placed to keep the tooth from being reinfected with bacteria, the access cavity is closed with either a temporary or permanent crown (depending on how robust the patient’s insurance plan is) and the process is complete.

At the end, you’re out some dental pulp and about $1,000 but you’re the proud owner of a wad of gutta-percha and you can drink coffee without flinching.

A root canal illustrated

American Association of Endodontists

Wikipedia: Pulp (tooth)

Step by Step Root Canal

The Root Canal Procedure

Week 2. Honduras, palm oil and the repercussions of modernization

Nearly 25 years ago, the World Bank invested in a small jungle valley in Honduras. The World Bank, which lends money to impoverished countries around the globe, used a land program ostensibly designed to bring much-needed wealth to rural communities through modernization. In this case, that meant loaning some $30 million to palm oil giant Dinant to help it buy up a few thousand acres of land in Bajo Aguán.

The plan was never popular in Bajo Aguán, but then-President José Manuel Zelaya — a leftist who raised the minimum wage by 80% and introduced generous subsidies for farmers — was a bastion against the complete exploitation of the locals, which kept a lid on the tensions. After he was ousted, though, in a 2009 military coup, conditions in Bajo Aguán rapidly deteriorated.

A 2015 investigation by the International Consortium of Investigative Journalists found that at least 133 killings since 2010 were linked to land disputes in the area. The violence has largely been two-sided, with both locals and Dinant (and other corporate landholders) accused of beatings, torture and murder.

Elsewhere in the country, 109 environmental activists have been murdered since 2010 for standing up against dams, mining operations and agricultural projects, according to a report from Global Witness. The most notable was Berta Cáceres, who was shot to death in 2016. Cáceres, an internationally renowned environmentalist best known for halting the construction of the Agua Zarca dam in Rio Blanco, was killed in a safe house after telling her friends and daughter to prepare for a world without her.

Honduras, of course, is just one example of the repercussions of modernization and the pressures put upon developing nations by the Global North. In many ways, it is a forgotten poster child: The country’s murder rate was 42.8 per 100,000 last year (down from 85.5 per 100,000 in 2011); it is one of the poorest countries in the Americas, despite being resource-rich; and it bore the brunt of American intervention — or, in this case, the lack thereof.

In the wake of the 2009 military coup that toppled Zelaya, the United States (and, many like to note, Secretary of State Hillary Clinton) was one of the few countries in the international community that refrained from calling the act a “coup.” The argument that Clinton and co. made was that to label the military intervention — which exiled Zelaya to Costa Rica and threw the country into chaos — a coup would mean that the U.S. would be required to cancel all aid to the country. But here we are, years after the democratically elected president was ousted, and Honduras remains unstable.

I don’t mean to pick on Honduras, or to write flippantly about the strife that many millions of people are enduring. What I mean to do is work through my own knowledge about repercussions and about the consequences of modernization, and to get a sense for the struggles that activists around the globe face in the name of their righteous cause.

Berta Cáceres, for instance, was a dogged environmentalist who would have fit nicely into the American narrative of going green. But her most notable accomplishment was halting the construction of a hydroelectric dam — exactly the sort of project that American activists would be pushing for to undercut reliance on pollutants like coal and gas.

In Bajo Aguán things are more clear-cut, though still not black and white. Dinant is the sort of easily recognizable corporate landowner taking advantage of vulnerable communities, but it is operating with money from the World Bank, doled out in the name of modernization, purportedly to help farmers and peasants adapt to a global world by injecting jobs and cash into communities.

And while these two brief stories don’t negate the importance of social, cultural and environmental progress and modernization, it’s easy to forget the reverberations: that nothing happens in a vacuum, and — even more — that everything is invariably enmeshed in a complex web of connections and crescendos. America supports a military coup, drives demand for palm oil, lowers the cost of gas and inflates the value of renewable energy, and, in the process, forgets that people — oftentimes people halfway across the world and living entirely foreign lives — make it all possible.

 

Sources:

Week 1. From Backrub to Google: Wrestling with what’s known (and what’s not)

In 1996 a pair of friends wrote a program in their dorm room that crawled, cataloged and generally organized what was, at that time, the modest expanse of the internet. Backrub, as it was called then, got a small investment, moved into a garage and became Google.

Today, Google — and its parent company Alphabet — is, in many ways, the backbone of the internet, the trunk from which the webbed branches of the world wide web grow and one of the largest hubs of information that has ever existed. Despite being software so ingrained in daily life that it has become functionally invisible, the most basic things that Google does (and those which allow it to generate more revenue than many countries) remain a mystery to most of us.

At its most simple, Google is a search engine that functions by performing three basic tasks: crawling the internet, indexing content, and, upon command, retrieving what’s been indexed. In action, Google’s software essentially visits every webpage that’s ever been linked to (crawling), makes a copy of the page (indexing) and then promptly repeats, following every link on that page, making a copy of those pages and following every new link ad infinitum. This indexing process generates massive amounts of data (dated estimates put Google’s store at some 15 exabytes — 15 million terabytes, or roughly 30 million personal computers’ worth — at any given time). This inundation of data makes Google’s ability to retrieve search results in a fraction of a second all the more impressive. It’s also why the final function of a search engine is arguably the most important: the retrieval algorithm.
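To make those three tasks concrete, here is a deliberately tiny sketch of the crawl, index and retrieve loop in Python, using only the standard library. The seed URL is a placeholder, and everything that makes a real crawler viable (robots.txt, rate limits, deduplication at scale) is left out; this illustrates the idea, not Google’s actual pipeline.

```python
# Toy search engine: crawl pages, build an inverted index, then retrieve.
from collections import defaultdict, deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
import re

class LinkAndTextParser(HTMLParser):
    """Collects href links and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl from a seed URL; returns word -> set of URLs."""
    index = defaultdict(set)
    queue, fetched = deque([seed_url]), set()
    while queue and len(fetched) < max_pages:
        url = queue.popleft()
        if url in fetched:
            continue
        fetched.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue                          # skip pages that fail to load
        parser = LinkAndTextParser()
        parser.feed(html)
        for word in re.findall(r"[a-z]+", " ".join(parser.text).lower()):
            index[word].add(url)              # indexing: note where each word appears
        for link in parser.links:             # crawling: follow every link we find
            absolute = urljoin(url, link)
            if absolute.startswith("http"):
                queue.append(absolute)
    return index

if __name__ == "__main__":
    index = crawl("https://example.com")           # placeholder seed URL
    print(sorted(index.get("example", set())))     # retrieval: look a word up
```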

Google’s algorithm is both beautiful and terrifying. Pared down to generalities, when you enter a search query, Google uses an algorithm known as PageRank that helps sort search results by two factors: relevance and ranking. But nothing is so simple as it sounds, especially not online. Google’s way of measuring relevance and rank is shockingly personal, and it’s likely that no one knows us as well as Google does.
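The original PageRank idea, in which pages effectively vote for the pages they link to, is at least simple enough to sketch. Below is a toy power-iteration version run on a made-up four-page link graph; it illustrates link-based ranking only, not the many other signals Google’s production ranking uses.

```python
# Toy PageRank: rank pages by the link structure between them.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:                  # dangling page: spread its rank evenly
                for other in pages:
                    new_rank[other] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:       # pass a share of rank along each link
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

links = {                                     # hypothetical four-page link graph
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about", "post"],
    "post": ["blog"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:>5}  {score:.3f}")
```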

Odds are, if you are like me, when you are logged onto your personal computer, you’re logged into Gmail, which means you’re logged into Google, which means that every time you search something, Google uses an algorithm tailored specially to you, based on countless factors — your search history, your browser history, your shopping habits, where you are, where you have been, what devices you are using, your demographic, your family’s demographic and dozens or hundreds or thousands of other factors that we (the unprivy internet novices) don’t even realize are important, but that Google has thought to track. And these factors and the search results they generate create a sort of personalized internet. An internet not so cloistered as the “social media bubble” that troubled so many after 2016’s elections, but one that nonetheless holds the power to skew our perception of knowledge and information.

Which brings me to a common personal refrain: Should I be alarmed? I love how seamlessly Google does everything I ask of it, and the collection of data is what makes its service work: it knows everything I could ever need to know before I even ask. It’s delightful and unsettling all at once. (The My Activity page, which brazenly packages everything you do on the internet as a sort of personal convenience, is a small example of just how much power Google knows it holds.)

And Google, of course, is just one of many. Our devices and desire for constant connectivity have bulldozed a path for dozens of innocuous-seeming services to make hundreds of billions of dollars off of us — off the information that makes us individuals, all of it bought and sold thousands of times over so that when we open a page we see an ad and suddenly desire a new pair of boots, even though we just bought the exact pair we thought we wanted.

Admittedly, Google might have been a lot to bite off for the first of what will hopefully be many blog posts throughout the year. But I guess I’m hoping to wrestle with things — with my apathy and my doomsayer inclinations, and, more broadly, to understand and engage with the many great unknowns of the world. And what is more unknown than everything about me that has been crawled, indexed, broken down into ones and zeroes and stored on some server in a faraway state to sell me a new pair of boots?

Sources:

Week 1. What the Eye Doctor Saw: A (Very) Brief History of the Beginnings of Esperanto

In the 1870s and 1880s the optimistic ophthalmologist Ludwik Lejzer Zamenhof of Bialystok, Poland, created Esperanto as a way of bridging cultural conflict. One of his main motivations was to reduce the struggle of cross-linguistic communication. He lamented the time and toil spent learning foreign tongues. Even with great effort, it is difficult, he writes, to “converse with other human beings in their own languages.” He rued the effort and money wasted in translation, which provides only a “tithe of foreign literature” to the reader. He declared that, in addition to the difficulty of learning a foreign language well, “there are but few persons who can even boast a complete mastery of their mother tongue.” Zamenhof saw words and expressions borrowed from other languages as signs of linguistic poverty and bemoaned that we are “obliged […] to express our thoughts inexactly” using phrases from other tongues.

Zamenhof’s complaints did not lie solely with the problem of perceived inarticulateness. He saw unlike languages as barriers to solidarity. “Difference of speech,” he writes, “is a cause of antipathy, nay even of hatred, between people.” From his viewpoint, the “strange sound” of other languages keeps people aloof and distant and only serves to heighten cultural differences. Zamenhof, therefore, saw the use of an international auxiliary language (auxiliary being a key term—it was not his intention to replace first languages) as a path to global peace. He predicted that science and commerce, too, would receive a boon from the introduction of an international idiom.

Zamenhof’s visionary ideas never quite reached fruition. You probably don’t speak Esperanto. You probably don’t know someone who does. However, it is indeed an international language with a comparatively healthy speaker population. Navajo, the most-spoken Native American language in the U.S., has about 170,000 speakers; Esperanto has up to 2,000,000, with at least 1,000 native speakers. The constructed language so far hasn’t ended any wars, but it offers a sense of community, if not for the entire world, then at least for thousands and thousands of hopeful hobbyists.

Sources:

Dr. Esperanto’s International Language, Introduction & Complete Grammar

Ethnologue: Esperanto

Ethnologue: Navajo