Thoughts on ecological activism

Before I return to my normal posts on Linguistics and Speech research, I have one more thought on my post-ICPhS trip to Cairns. After the dive, I went to the edge of the rain-forest on a half-day 4×4 tour. It was more sitting and less walking than I would normally go for, but the views were pleasant.

The trip showed us the amazing strangler fig, which is essentially an immortal tree with serious ill-intent toward the trees it grows next to. If you are dumb enough to grow near one of these monsters, within 100 years you are dead, dead, dead!

And the waterfall we went to at the end of the trip was stunning.

But there was one long part where the guide had us standing still for 30 minutes listening to a discussion of local wildlife mixed with the usual guilt-trip about ecological destruction. In one sense, that is fair enough. Humans have an enormous impact on this planet, and plenty of it is negative. But in another sense, I just wanted to crawl out of my skin. Not because I felt guilty for what I’ve done, but because I have absolutely no idea how this approach can help make the world a better place.

I can appreciate the complaint that the Australian government is not letting Cairns reuse brown-space for a new boat launch, but is instead forcing them to tear down a valuable mangrove. But I can’t do anything about it. I am not Australian, I don’t vote in Australia, and I can’t force the Australian government to save the mangroves. Even though I would LOVE to, because I want the Great Barrier Reef to keep growing spectacular fish! There was also a lot about how tourists should support family businesses over large-scale tourism businesses.

But it went on too long. We had older people on this trip, and one of them lost circulation in her legs listening to the over-long presentation. She fell trying to walk back to the vehicle after the talk. She wasn’t badly hurt, but that is the kind of fall that can break a hip, greatly shortening the life of the elderly person in question!

The guide also complained about the large influx of people into Cairns, who then demand a quieter place, which involves cutting down the trees bats live in and otherwise reducing the wonders of nature in the area to make it more like the big cities they came from. Fair enough, but I heard no solutions. I thought “stronger insulation and noise-control laws, or education about good construction standards, would end that nonsense.” I thought “there are really effective solutions that we can implement ourselves, so tell everyone about them!” As a result, I was frustrated by the missed opportunity.

I compare this approach to that of Reef Encounters. They brought us to a beautiful place full of natural wonders. When we complimented them on their good job, they made it clear it was *nature* that did the good job, and we all benefit from what nature does. When we went diving, the guides always picked up any trash they saw on the ocean floor, and taught us to do the same. When the great food was served and the good times were had, they thanked us for supporting a local family business instead of one of the large-scale tourism businesses.

And there it is. They let nature speak for itself. They embodied solutions. They did a great job and thanked us for supporting local businesses *after* they did that great job. People who experience such things will appreciate nature, know how to engage in good ecological behaviour, and continue to make better choices for local communities.

So here is to all those who embody good ecological behaviour, cleaning up after themselves and others. Here’s to the people who build improved technologies that waste less and are more efficient. Here’s to those who keep track of nature – and trade – exposing it to the light where it can be made as good as possible, a little better every day. And yes, here’s to those who vote to preserve mangroves and re-use brown space for boat-docks.

Diving the Great Barrier Reef

After the International Congress of Phonetic Sciences in Melbourne, my friend Phil Howson and I went diving in the Great Barrier Reef off the coast of Cairns. The trip was truly amazing. During the trip, I did 10 dives, 5 of them as training for advanced open-water diving – down to 30 meters (100 ft).

The conditions were absolutely amazing, as you can see from the boat shots from the professional photographer. (These are all Tilly’s shots; I saw similar things, but I have neither the gear nor the eye to take shots like these!)

My friend Phil and I had a lot of fun, above and below water.

And the reef was amazing.

And that was just the coral. I most definitely found Nemo. Often. More often than Tilly photographed them.

And I might have encountered a couple of elder things. Tilly even got a shot with the face-hugger look. For me, the cuttlefish was always closed up, as in photos 1 and 2.

I saw lots of little fish like these.

And crazy schools of fish – some even more impressive than these.

I cannot count the number of times I saw scenes like this, but with much wider views and more variety of fish.

I saw rays quite often.

And I played light with heaps of these little doggies of the sea. If you’d told me I’d ever play light with a shark, I’d have called you barking mad! I clearly have no actual sense! (Sharks tend to like the light, as they use it to catch fish, but other fish, such as fusiliers, are super-keen on using your light too, and they will surround you like crazy!)

I swam with these turtles, but I did NOT see the one eating the jellyfish. That one my buddy saw, and of course Tilly, who took the photos.

And I even have some proof of swimming with the turtles.

I also enjoyed the slower creatures. Giant clams!

Unfortunately, I did not see the moray pictured here. Tilly got great shots though!

And I never saw a starfish on the trip either, though we do have shots from Tilly.

But, I did see these guys:

This trip was truly amazing. It really does look like this under the ocean at the Great Barrier Reef, and even more amazing than this. My first night-dive was a kaleidoscopic fever-dream better than my wildest imaginings. I cannot recommend diving enough.

EDIT: I now have a photo of my deep dive to 30 m during dive training. The depths are an eerie place, where cracked eggs stay intact and red tomatoes look green. They are worth a quick and carefully planned visit. Running out of air is EASY. During my training, my instructors deliberately shared air with me, and I deliberately used the back-up bottle at 5 m depth, as skill practice.

ICPhS 2019

The Nineteenth International Congress of Phonetic Sciences was held in Melbourne from August 5-9, 2019. It was an amazing success, with over 950 delegates and nearly unlimited opportunities to forge new collaborations and improve the quality of phonetic science research worldwide.

I was especially impressed with the entire science committee who organized over 400 reviewers for the conference, dealt with difficult-to-administer programme software, and kept every talk and poster well coordinated despite the inevitable last-minute changes. Paola Escudero, Sasha Calhoun, and Paul Warren are to be commended!

I also commend Rosey Billington, the social media liaison. Social media was the knife’s edge between success and failure. I’m not good at that stuff, to the point of having had to effectively leave Facebook of late, but I admire those who can bend social media to their will – especially when their will is goodwill.

I also commend the keynote speakers. My former PhD supervisor Bryan Gick gave an amazing presentation on how bodies talk. I really enjoyed seeing the old research, and seeing the new work I haven’t been as involved with. It was great to see that Connor Meyer is joining him in writing a new book on that same topic – I await it with great anticipation!

Lucie Menard presented on “Production-perception relationships in sensory deprived populations: the case of visual impairment”. Her talk really helped me see how seeing helps with speaking. I cannot recommend reading her papers enough.

And of course, the media darling of the event was Jonas Beskow, presenting “On talking heads, social robots and what they can teach us”.

His talk showed us some of the state of the art in human-robot interactive systems, which, while super-interesting, also strongly pointed out to me how much we can still do to improve human/computer interaction. We have only just begun to exploit such opportunities.

Visual Prosody

I also really enjoyed the visual prosody contest – the poster on the left showed a method of highlighting both pitch and intensity at the same time. Visual prosody requires innovative techniques for showing multi-dimensional information in an intuitive way that people can grasp using the built-in abilities of their visual systems. I intend to write a blog post on this topic, highlighting the incredible multidimensionality of some of the greatest visualizations used in data presentation today – weather maps. The best of these present rain, wind, and pressure systems all at the same time, in a manner nearly anyone can decipher instantly with but a little training.

The conference dinner was also fun, with really good food and a spectrogram contest with participants who were insanely fast. The winners of two of the contests had answers before I could even finish a draft segmentation. I’m not sure who taught them to read spectrograms faster than I read text, but someone did, and I was impressed!

I was glad to be a member of the organizing committee, despite being quite bad at getting corporate sponsors. I contacted over 200 companies and got 0 sponsors. We had only a couple, mostly publishers, and mostly organized by other committee members. Only one company contacted us on their own. If I were to do it again, I would contact the previous delegates from 4 years before and ask each three questions: “What research tools do you use that you like? What have you bought in the last year? What is the contact information for the salesperson who sold you those items?” With this information, it becomes possible to build a database of exactly how we as phonetics researchers can benefit companies, with contacts for those who would care the most.

TreeForm updated to 1.1

After 11 years, I have finally written a new update to TreeForm to address the incompatibility issues people have been having with newer operating systems. This version of TreeForm is also a LOT easier to run and install. On Windows and Linux machines with Java installed, you can just click and run the JAR file directly – all the menus and help screens are incorporated directly, and the file runs anywhere. This helps a LOT with university computers, where you often are not allowed to install software.

On Macs, I have provided a package that will install the software. It still requires permission through the “Security & Privacy” tab of “System Preferences”, but after that it will run, even on Mojave. (If anyone knows how to get Apple to let this app install without activating their “Gatekeeper” program, please let me know. I really do not enjoy Apple’s war on open-source programs.)

This version also fixes the color chooser bugs that Oracle inadvertently introduced, making color choices available again. I have also updated the help, about, and what’s-new screens. Lastly, I disabled the custom look and feel – TreeForm now uses the operating system’s look and feel, and so looks slightly different on each system.

You can always find TreeForm at SourceForge, and I will soon put the new source on GitHub.

Building a cleaned dataset of aligned ultrasound, articulometry, and audio.

In 2013, I recorded 11 North American English speakers, each reading eight phrases with two flaps in two syllables (e.g. “We have editor books”) at 5 speech rates, from about 3 syllables/second to 7 syllables/second. Each recording included audio, ultrasound imaging of the tongue, and articulometry.

The dataset has taken a truly inordinate amount of time to label, transcribe (thank you Romain Fiasson), rotate, align ultrasound to audio, fit into shared time (what is known as a Procrustean fit), extract acoustic correlates from, and clean of tokens that have recording or unfixable alignment errors.

It is, however, now 2019, and I have a cleaned dataset. I’ve uploaded the dataset, with the data at each point of processing included, to an Open Science Framework website. I will, over the next few weeks, upload documentation on how I processed the data, as well as videos of the cleaned data showing ultrasound and EMA motion.

By September 1st, I plan on submitting a research article discussing the techniques used to build the dataset, as well as a theoretically motivated subset of the articulatory-to-acoustic correlates within this dataset, to a special issue of a journal whose name I will disclose should they accept the article for publication.

This research was funded by a Marsden Grant from New Zealand, “Saving energy vs. making yourself understood during speech production”. Thanks to Mark Tiede for writing the quaternion rotation tools needed to orient EMA traces, and to Christian Kroos for teaching our group at Western Sydney University how to implement them. Thanks to Michael Proctor for building filtering and sample repair tools for EMA traces. Thanks also to Wei-rong Chen for writing the palate estimation tool needed to replace erroneous palate traces. Special thanks to Scott Lloyd for his part in developing and building the ultrasound transducer holder prototype used in this research. Dedicated to the memory of Romain Fiasson, who completed most of the labelling and transcription for this project.

Tutorial 4: Coin-toss for Linguists (Central Limit Theorem)

Here is a basic demonstration of how randomness works, but because I am writing this for linguists rather than statisticians, I’m modifying the standard coin-toss example for speech. Imagine you have a language with words that all start with either “t” or “d”. The word means the same thing regardless, so this is a “phonetic” rather than “phonemic” difference. Imagine also that each speaker uses “t” or “d” randomly about 50% of the time. Then record four speakers saying 20 of these words 10 times each.

Now ask the question: Will some words have more “t” productions than others?

The answer is ALWAYS yes, even when different speakers produce “t” and “d” sounds as completely random choices. Let me show you:

As with most of these examples I provide, I begin with code for libraries and colors.

library(tidyverse)
library(factoextra)
library(cluster)

RED0 = (rgb(213,13,11, 255, maxColorValue=255))
BLUE0 = (rgb(0,98,172,255, maxColorValue=255))
GOLD0 = (rgb(172,181,0,255, maxColorValue=255))

Then I provide code for functions.

randomDistribution <- function(maxCols, maxRep, cat1)
{
  # Build one row per production: word index (x), repetition number (n),
  # and production category (y), all initially labelled cat1 ("t").
  distroTibble = tibble(x = c(1:(maxCols * maxRep)), n = 1, y = "")
  for (i in c(1:maxCols))
  {
    for (j in c(1:maxRep))
    {
      distroTibble$x[((i - 1) * maxRep) + j] = i
      distroTibble$n[((i - 1) * maxRep) + j] = j
      distroTibble$y[((i - 1) * maxRep) + j] = cat1
    }
  }
  return(distroTibble)
}

randomOrder <- function(distro)
{
  # Flip a random half of the productions to "d", then compute the
  # percentage of each variant per word (x) and order words for plotting.
  distro <- distro %>% mutate(line = row_number()) %>%
    mutate(y = case_when(line %in% sample(line, n() %/% 2) ~ "d", TRUE ~ y)) %>%
    group_by(x, y) %>% summarize(count = n(), .groups = "drop_last") %>%
    mutate(perc = count / sum(count)) %>% ungroup() %>%
    arrange(y, desc(perc)) %>% mutate(x = factor(x, levels = unique(x))) %>%
    arrange(desc(perc))
  return(distro)
}

And now for the data itself. I build four tables, each with 20 words (x values) and 10 recordings (n values), with the production category in the “y” value. I start by labelling all of these “t”, and then randomly select half of the productions and relabel them “d”. I then compute the percentage of each variant by word (x).

I also combine the four speakers and run the same process on the pooled data.

D1 <- randomDistribution(20,10,"t")
D2 <- randomDistribution(20,10,"t")
D3 <- randomDistribution(20,10,"t")
D4 <- randomDistribution(20,10,"t")
D5 <- bind_rows(D1,D2,D3,D4)

D1 = randomOrder(D1)
D2 = randomOrder(D2)
D3 = randomOrder(D3)
D4 = randomOrder(D4)
D5 = randomOrder(D5)

Now I plot a distribution graph for each of them. Note that some words are mostly one type of production (“d”), and others are mostly the other production (“t”). This inevitably occurs by random chance. And it differs by participant.

However, even when you pool all the participant data, you see the same result. This distribution is simply part of how randomization works, and it needs no explanation beyond the nature of random sampling itself.

D1 %>% ggplot(aes(x=x, fill=y, y=perc)) + geom_bar(stat="identity") + scale_y_continuous(labels=scales::percent) + ggtitle("group 1")

D2 %>% ggplot(aes(x=x, fill=y, y=perc)) + geom_bar(stat="identity") + scale_y_continuous(labels=scales::percent) + ggtitle("group 2")

D3 %>% ggplot(aes(x=x, fill=y, y=perc)) + geom_bar(stat="identity") + scale_y_continuous(labels=scales::percent) + ggtitle("group 3")

D4 %>% ggplot(aes(x=x, fill=y, y=perc)) + geom_bar(stat="identity") + scale_y_continuous(labels=scales::percent) + ggtitle("group 4")

D5 %>% ggplot(aes(x=x, fill=y, y=perc)) + geom_bar(stat="identity") + scale_y_continuous(labels=scales::percent) + ggtitle("all groups")

And you can see that the combined data from all four speakers still shows some words with almost no “d” values, and some words with very few “t” values.

Because a purely random distribution will generate individual words with few or even none of a particular variant, even across speakers, you cannot use differences in the distributions by themselves to identify any meaningful patterns.
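To see why such skewed words are expected rather than surprising, here is a quick binomial sketch (an addition under the same 50/50 assumption, not part of the tutorial’s original code):

# Chance that one word (10 productions per speaker, p = 0.5) comes out
# at least 8 "t" out of 10 for a given speaker:
p_word <- pbinom(2, size = 10, prob = 0.5)  # P(at most 2 "d") ~ 0.055
# Chance that at least one of that speaker's 20 words looks this skewed:
1 - (1 - p_word)^20                         # ~ 0.68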

And that is the “coin toss” tutorial for Linguists – also known as the central limit theorem. The main takeaway message is that you need minimal pairs, or at least minimal environments, to establish evidence that a distribution of two phonetic outputs could be phonemic.

Even then, the existence of a phonemic distinction doesn’t mean it predicts very many examples in speech.
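Since the tutorial invokes the central limit theorem, here is a short sketch (an addition, under the same 50/50 assumption as above) showing why the name fits: the per-word “t” counts pooled over the four speakers follow a Binomial(40, 0.5), which is already close to a normal distribution.

# Each word gets 40 productions across the four speakers (4 x 10), each
# independently "t" with probability 0.5, so per-word "t" counts follow
# a Binomial(40, 0.5) distribution.
counts <- rbinom(100000, size = 40, prob = 0.5)
hist(counts, breaks = seq(-0.5, 40.5, 1), freq = FALSE,
     main = "Simulated per-word 't' counts", xlab = "'t' productions out of 40")
# Overlay the normal approximation (mean np, sd sqrt(np(1 - p))).
curve(dnorm(x, mean = 40 * 0.5, sd = sqrt(40 * 0.5 * 0.5)), add = TRUE)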

Tutorial 3: K means clustering

One of the easiest and most appropriate methods for testing whether a data set contains multiple categories is k-means clustering. This technique can be supervised, in that you tell the computer how many clusters you think are in the original data. However, it is much wiser to test many k-means clusterings using an unsupervised process. Here we show three of these. The first one we will examine, the “elbow” method, runs the clustering for several candidate cluster counts and produces a graph that lets you visually identify the ideal number of clusters – you spot it by finding the “bend” in the elbow. Here’s some code for generating a very distinct binary cluster and running the elbow test.

library(tidyverse)
library(factoextra)
library(cluster)
library(scales)

# Plot colours (the alpha value of 127 is an assumption) and Weitzman's
# overlap integrand: the pointwise minimum of the two normal densities.
GOLD0A = rgb(172, 181, 0, 127, maxColorValue = 255)
BLUE0A = rgb(0, 98, 172, 127, maxColorValue = 255)
min.f1f2 <- function(x, mu1, mu2, sd1, sd2)
  pmin(dnorm(x, mu1, sd1), dnorm(x, mu2, sd2))

points = 10000
sd1 = 1
sd2 = 1
mu1 = 0
mu2 = 6
p = integrate(min.f1f2, -Inf, Inf, mu1 = mu1, mu2 = mu2, sd1 = sd1, sd2 = sd2)

G1 <- tibble(X = rnorm(points, mean = mu1, sd = sd1),
Y = rnorm(points, mean = 0, sd = sd1),
Name="Group 1", col = GOLD0A,Shape=1)

G2 <- tibble(X = rnorm(points, mean = mu2, sd = sd2),
Y = rnorm(points, mean = 0, sd = sd2),
Name="Group 2", col = BLUE0A,Shape=2)

G <- bind_rows(G1, G2)

p2 = length(G$X[G$Name == "Group 1" & G$X > min(G$X[G$Name == "Group 2"])]) / points
p2 = p2 + length(G$X[G$Name == "Group 2" & G$X < max(G$X[G$Name == "Group 1"])]) / points
p2 = p2 / 2

fviz_nbclust(G[, 1:2], kmeans, method = "wss")

The second technique will tell you the answer, identifying a peak “silhouette width” with a handy dashed line.

fviz_nbclust(G[, 1:2], kmeans, method = "silhouette")

The third shows a “gap” statistic, with the highest peak identified.

gap_stat <- clusGap(G[, 1:2], FUN = kmeans, nstart = 25, K.max = 10, B = 50)
fviz_gap_stat(gap_stat)

As you can see, all three cluster-identification techniques show that the ideal number of clusters is 2, which makes sense because that is the number of groups we initially generated.

Here I show you what the difference between the real clusters and the estimated clusters looks like, beginning with the real clusters.

G %>% ggplot(aes(x = X, y = Y)) +
geom_point(aes(colour = Name), show.legend = TRUE) +
scale_color_manual(values=c(GOLD0A,BLUE0A)) +
xlab(paste("Overlap percent = ",percent(as.numeric(p[1])), " : Overlap range = ", percent(p2),sep="")) + ylab("") + coord_equal(ratio=1)

Followed by the k-means cluster.

set.seed(20)
binaryCluster <- kmeans(G[, 1:2], 2, nstart = 10, algorithm = "Lloyd")
binaryCluster$cluster <- as.factor(binaryCluster$cluster)
binaryCluster$color[binaryCluster$cluster == 1] = GOLD0A
binaryCluster$color[binaryCluster$cluster == 2] = BLUE0A
G$col2 = binaryCluster$color

G %>% ggplot(aes(x = X, y = Y)) +
  geom_point(aes(color = col2), show.legend = TRUE) +
  scale_color_manual(values = c(GOLD0A, BLUE0A)) +
  xlab("Unsupervised binary separation") + ylab("") + coord_equal(ratio = 1)

Notice that the unsupervised clustering will mis-categorize some items in the cluster, but gets most of them correct.
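To put a number on the mistakes, a short check (a sketch reusing the objects created above) cross-tabulates the k-means labels against the generating groups; the same lines can be rerun for the 4 and 2 standard deviation cases below.

# Compare k-means labels with the true groups; take the better of the
# two possible cluster-to-group alignments before counting errors.
tab <- table(truth = G$Name, cluster = binaryCluster$cluster)
accuracy <- max(sum(diag(tab)), tab[1, 2] + tab[2, 1]) / sum(tab)
1 - accuracy  # proportion of points assigned to the wrong group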

Here we generate a binary separated by 4 standard deviations.

points = 10000
sd1 = 1
sd2 = 1
mu1 = 0
mu2 = 4
p=integrate(min.f1f2, -Inf, Inf, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2)
G1 <- tibble(X = rnorm(points, mean = mu1, sd = sd1), Y = rnorm(points, mean = 0, sd = sd1), Name="Group 1", col = GOLD0A,Shape=1)

G2 <- tibble(X = rnorm(points, mean = mu2, sd = sd2), Y = rnorm(points, mean = 0, sd = sd2), Name="Group 2", col = BLUE0A,Shape=2)

G <- bind_rows(G1, G2)

p2 = length(G$X[G$Name == "Group 1" & G$X > min(G$X[G$Name == "Group 2"])]) / points
p2 = p2 + length(G$X[G$Name == "Group 2" & G$X < max(G$X[G$Name == "Group 1"])]) / points
p2 = p2 / 2

Notice that even with 4 standard deviations separating the groups, the elbow technique still clearly diagnoses 2 clusters – a binary system.

fviz_nbclust(G[, 1:2], kmeans, method = "wss")

fviz_nbclust(G[, 1:2], kmeans, method = "silhouette")

gap_stat <- clusGap(G[, 1:2], FUN = kmeans, nstart = 10, K.max = 10, B = 50)
fviz_gap_stat(gap_stat)

And here are the underlying clusters, with overlapping entries.

G %>% ggplot(aes(x = X, y = Y)) +
geom_point(aes(colour = Name), show.legend = TRUE) +
scale_color_manual(values=c(GOLD0A,BLUE0A)) +
xlab(paste("Overlap percent = ",percent(as.numeric(p[1])),
" : Overlap range = ",percent(p2),sep="")) + ylab("") + coord_equal(ratio=1)

Notice that the cluster analysis misidentifies many entries – a few percent of them.

set.seed(20)
binaryCluster <- kmeans(G[, 1:2], 2, nstart = 10, algorithm = "Lloyd")
binaryCluster$cluster <- as.factor(binaryCluster$cluster)
binaryCluster$color[binaryCluster$cluster == 1] = GOLD0A
binaryCluster$color[binaryCluster$cluster == 2] = BLUE0A
G$col2 = binaryCluster$color

G %>% ggplot(aes(x = X, y = Y)) +
  geom_point(aes(color = col2), show.legend = TRUE) +
  scale_color_manual(values = c(GOLD0A, BLUE0A)) +
  xlab("Unsupervised binary separation") + ylab("") + coord_equal(ratio = 1)

Lastly, here is a binary that is separated by only 2 standard deviations – a barely noticeable binary.

points = 10000
sd1 = 1
sd2 = 1
mu1 = 0
mu2 = 2
p=integrate(min.f1f2, -Inf, Inf, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2)
G1 <- tibble(X = rnorm(points, mean = mu1, sd = sd1), Y = rnorm(points, mean = 0, sd = sd1), Name="Group 1", col = GOLD0A,Shape=1)

G2 <- tibble(X = rnorm(points, mean = mu2, sd = sd2), Y = rnorm(points, mean = 0, sd = sd2), Name="Group 2", col = BLUE0A,Shape=2)

G <- bind_rows(G1, G2)

p2 = length(G$X[G$Name == "Group 1" & G$X > min(G$X[G$Name == "Group 2"])]) / points
p2 = p2 + length(G$X[G$Name == "Group 2" & G$X < max(G$X[G$Name == "Group 1"])]) / points
p2 = p2 / 2

Notice that even with 2 standard deviations separating the groups, the elbow technique DOES diagnose that this is a binary system, but barely. The silhouette and gap techniques also point to a binary.

library(factoextra)
fviz_nbclust(G[, 1:2], kmeans, method = "wss")

fviz_nbclust(G[, 1:2], kmeans, method = "silhouette")

gap_stat <- clusGap(G[, 1:2], FUN = kmeans, nstart = 10, K.max = 10, B = 50)
fviz_gap_stat(gap_stat)

Here you can see the underlying binary division.

G %>% ggplot(aes(x = X, y = Y)) +
  geom_point(aes(colour = Name), show.legend = TRUE) +
  scale_color_manual(values = c(GOLD0A, BLUE0A)) +
  xlab(paste("Overlap percent = ", percent(as.numeric(p[1])),
             " : Overlap range = ", percent(p2), sep = "")) +
  ylab("") + coord_equal(ratio = 1)

And as you would expect, oh boy does the k-means clustering make mistakes.

set.seed(20)
binaryCluster <- kmeans(G[, 1:2], 2, nstart = 10, algorithm = "Lloyd")
binaryCluster$cluster <- as.factor(binaryCluster$cluster)
binaryCluster$color[binaryCluster$cluster == 1] = GOLD0A
binaryCluster$color[binaryCluster$cluster == 2] = BLUE0A
G$col2 = binaryCluster$color

G %>% ggplot(aes(x = X, y = Y)) +
  geom_point(aes(color = col2), show.legend = TRUE) +
  scale_color_manual(values = c(GOLD0A, BLUE0A)) +
  xlab("Unsupervised binary separation") +
  ylab("") + coord_equal(ratio = 1)

However, k-means clustering can still uncover the binary.

References:

Weitzman, M. S. (1970). Measures of overlap of income distributions of white and Negro families in the United States. Washington: U.S. Bureau of the Census.

https://afit-r.github.io/kmeans_clustering

https://rpubs.com/williamsurles/310847

University of Canterbury Open Day

The University of Canterbury held this year’s Open Day on Thursday, July 11, 2019. It was a chance for high-school students to look at possible majors at our University. This year I had the chance to showcase UC Linguistics, and I brought along our ultrasound machine to show people images of my tongue in motion, and let them see their tongues. A few were intimidated by the idea of seeing their own tongues on a machine, but lots of young students participated, and hopefully got a bit of a taste for Linguistics and especially phonetic research.

However, next year I will try to build more materials to address all the ways linguistics can be useful to students. I like the fact that Linguistics is both arts and science at the same time. You learn to write, you learn numeracy, you learn statistics, and you learn how to do experiments. And on top of that, our students learn how to speak in public and speak well. These are exceedingly useful skills, and have led students to continue in research, get positions with Stats NZ, build up computer research in local companies, and so much more.

Tutorial 2: Overlapping binaries.

Having previously demonstrated what two binary groupings look like when they are separated by six standard deviations, here I demonstrate what they look like when separated by 4 standard deviations. Such a binary has an overlapping coefficient of 4.55%, as seen from the code below, which computes the coefficient by integration based on Weitzman’s overlapping distribution.
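Here is a minimal sketch of that computation, assuming the same min.f1f2 integrand (the pointwise minimum of the two normal densities) used in the k-means tutorial:

library(scales)

# Weitzman's overlap integrand: the pointwise minimum of two normal densities.
min.f1f2 <- function(x, mu1, mu2, sd1, sd2)
  pmin(dnorm(x, mu1, sd1), dnorm(x, mu2, sd2))

# Overlapping coefficient for two unit-variance normals 4 SD apart.
p <- integrate(min.f1f2, -Inf, Inf, mu1 = 0, mu2 = 4, sd1 = 1, sd2 = 1)
p
percent(p$value, accuracy = 0.01)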

## 0.04550026 with absolute error < 3.8e-05
## [1] "4.55%"

This is what such data looks like graphed in a density curve.

The overlap range is now much larger, as can be seen in the scatterplot below.

Now let’s look at a separation of 2 standard deviations.
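Rerunning the same integration sketch with the means only 2 standard deviations apart:

p <- integrate(min.f1f2, -Inf, Inf, mu1 = 0, mu2 = 2, sd1 = 1, sd2 = 1)
p
percent(p$value, accuracy = 0.01)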

## 0.3173105 with absolute error < 4.7e-05
## [1] "31.73%"

The density plot now overlaps a lot.

And this is what the scatterplot looks like.

Now look at the scatterplot without color differences. At this point, there is only the barest hint that there might be a binary in this system at all.

Let us compare that to the initial binary, separated by 6 standard deviations, now in grey.


With this data, the binary remains visible and obvious even when both samples are grey.

However, even if you cannot observe categories by looking directly, there are tools that can help identify N-ary categories in what looks to us like gradient data – the tools of unsupervised cluster analysis, which I discuss in the next tutorial.

The RMarkdown file used to generate this post can be found here. Some of the code was modified from code on this site.

References:

Weitzman, M. S. (1970). Measures of overlap of income distributions of white and Negro families in the United States. Washington: U.S. Bureau of the Census.

Tutorial 1: Gradient effects within binary systems

This post provides a visual example of gradient behaviour within a univariate binary system.

Here I demonstrate what two binary groupings look like when each group has a standard deviation of 1 on a non-dimensional scale and the group means are separated by 6 standard deviations. Such a binary has an overlapping coefficient of 0.27%, as seen from the code below, which was computed by integration based on Weitzman’s overlapping distribution.
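A minimal sketch of that computation: for two equal-variance normals whose means are d standard deviations apart, the overlapping coefficient reduces to 2 * pnorm(-d/2), which matches the integration approach used in the other tutorials.

library(scales)

# Overlap of N(0, 1) and N(6, 1): the densities cross at the midpoint
# (3 SD from each mean), so the coefficient is 2 * pnorm(-3).
percent(2 * pnorm(-6 / 2), accuracy = 0.01)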

## [1] "0.27%"

But the overlapping coefficient hides the fact that in groups of, say, 10,000 tokens per category, the tails still overlap noticeably, and sometimes individual tokens look like they belong firmly in the other binary choice – like the one blue dot in the gold cloud. (Note that the y-axis is added to make the display easier to understand; it provides none of the data used in this analysis.)
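A quick sketch of the expected counts makes this concrete (assuming 10,000 tokens per group with unit standard deviation):

n <- 10000
# Expected Group 1 tokens past the midpoint between the groups (3 SD),
# i.e. on the "wrong" side of an optimal boundary:
n * pnorm(-3)  # ~ 13.5 tokens
# Expected Group 1 tokens landing within 2 SD of the *other* group's
# mean (at least 4 SD from their own):
n * pnorm(-4)  # ~ 0.3, so a lone blue dot deep in the gold cloud
               # shows up in a fair fraction of simulated datasets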

In short, in a binary system, individual tokens that sit thoroughly within the other category’s range will exist due to simple random variation, yet they provide evidence neither of constant gradient overlap nor against the existence of the binary. Such tokens occur whenever the two categories are close enough relative to the number of examples – close enough being determined by simple probability, even in a univariate system (one without outside influences).

The RMarkdown file used to generate this post can be found here. Some of the code was modified from code on this site.

References:

Weitzman, M. S. (1970). Measures of overlap of income distributions of white and Negro families in the United States. Washington: U.S. Bureau of the Census.