2. Engagement Rate

The engagement rate is a metric social media analysts use to cross-compare profiles and to measure relative efficiency and performance across social profiles. Social media analysts and marketers are keenly interested in how comparable content performs across profiles. In what follows, we will discuss why the engagement rate is incorrectly calculated, what effects that leads to, and how this can be resolved.

1.2.1 What does the engagement rate attempt to answer?

The goal of the engagement rate comes from the necessity to make cross-sectional comparisons relative to audience size. Comparisons between profiles are valuable for the benchmarking needs of interested parties.

If a particular profile puts out a post, and you want to compare the performance of that post against a post from another profile, there is no common point of reference. We must introduce some reference value to relate the two.

1.2.2 How is the engagement rate calculated?


The Engagement Rate comes from the necessity to make cross-sectional comparisons relative to audience size. If we merely take the number of engagements on a profile, then we have no point of reference as to the potential engagement of that piece of content. A more appropriate measure for an internal social media analyst would be engagement relative to reach; however, the same mathematics applies. There are a few methods by which the Engagement Rate is calculated for a Business, Public Figure, or Organization, listed here and sketched in code after the list.

# of Interactions / Audience Size. The number of interactions for a particular post is calculated as the sum of the likes, shares, and comments on the post.

PTA / Audience Size. The PTA is the People Talking About metric provided by Facebook. Some argue there are slight technical glitches that make it useless. We disagree and consider it an appropriate proxy for the overall volume of activity on a profile. It is a moving average of engagement, profile growth, and interactions. While there will occasionally be an outlier due to a data reset, it is a reliable proxy for the level of interaction.

# of Interactions / PTA. This is a less-used statistic and an interesting idea, but the interactions are embedded in the PTA itself, so the numerator and denominator are not independent.
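To make these formulas concrete, here is a minimal Python sketch of the three naive calculations. The field names (likes, shares, comments, audience size, PTA) are illustrative placeholders rather than the fields of any particular platform's API.

```python
# Minimal sketch of the three naive Engagement Rate formulas.
# Field names are illustrative placeholders, not any platform's API.

def interactions(post: dict) -> int:
    """Total interactions on a post: likes + shares + comments."""
    return post["likes"] + post["shares"] + post["comments"]

def er_interactions_over_audience(post: dict, audience_size: int) -> float:
    """Method 1: # of interactions / audience size."""
    return interactions(post) / audience_size

def er_pta_over_audience(pta: int, audience_size: int) -> float:
    """Method 2: People Talking About (PTA) / audience size."""
    return pta / audience_size

def er_interactions_over_pta(post: dict, pta: int) -> float:
    """Method 3: # of interactions / PTA (numerator is embedded in the denominator)."""
    return interactions(post) / pta

post = {"likes": 1200, "shares": 150, "comments": 90}
print(er_interactions_over_audience(post, audience_size=50_000))  # 0.0288
```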

Unfortunately, all of these methods are invalid. They are on the right track; however, they all implicitly rely on underlying assumptions. The major assumption is that the data are normally distributed, which is in fact not the case. If the data are not normally distributed, then even something as fundamental as taking an average becomes irrelevant.

The foundation of the engagement rate is undermined by the fact that neither the distribution of the number of likes nor the distribution of the number of engagements is a normal (or binomial) distribution. Instead, both obey a power law. This means that simple normal statistics are not an appropriate way to handle the data; non-linear methods must be used, even for something as simple as the engagement rate. This is a more network-like approach, but it is an important distinction. The concepts of an average, of the central limit theorem, of standard deviations, and of simple confidence intervals are thrown out the window. When trying to control for audience size, the engagement rate as commonly used is fundamentally flawed and simply invalid.
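To illustrate why the ordinary average breaks down under a power law, here is a quick numerical sketch using simulated data (not the data from the paper cited later): the mean of a heavy-tailed sample is dominated by a handful of extreme values, while the mean of a normal sample is not.

```python
# Sketch: the sample mean is stable for normal data but erratic for
# heavy-tailed (power-law-like) data. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)

normal = rng.normal(loc=100, scale=15, size=10_000)
pareto = (rng.pareto(a=1.2, size=10_000) + 1) * 10  # heavy-tailed, tail exponent ~ 1.2

for name, sample in [("normal", normal), ("power-law-like", pareto)]:
    mean, median = sample.mean(), np.median(sample)
    top1_share = np.sort(sample)[-100:].sum() / sample.sum()  # share held by the top 1%
    print(f"{name:15s} mean={mean:10.1f}  median={median:8.1f}  top-1% share={top1_share:.0%}")
```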

Assume two companies each make a post on Facebook. One post gets a million likes; the other gets a thousand. You might conclude that the company with a million likes on its post is more engaging. But what if the first company has 10 million profile likes, while the second has only 1,000? Certainly we must account for the relative scale of the profiles before comparing them cross-sectionally. Enter the engagement rate.

So we say that the profile with 10 million fans received 1 million likes on its post: 10% of its fans interacted with the post, so it has a 10% rate of engagement. The other profile, with 1,000 profile fans and 1,000 post likes, has a 100% engagement rate. Which one is more engaging? Is it right to penalize the first profile simply because it has more profile fans?

It is easy to see the limitations of the engagement rate. Unless the two profiles have similar audience sizes, inference is essentially arbitrary. One problem, then, is that this fuzzy math is being peddled as real social science. It is one thing to point to the dubious math; it is another to show how fundamentally off it is.

When we divide one quantity by another, we are making some fundamental assumptions. One of them is that the two datasets are normally distributed. That is to say, if we were to make a histogram of each column of data independently, each would look like this:

When the data is normally distributed, it means among other things that the average is the average as we know it (a certainty we will see should not be taken for granted), and that the distribution of data trails off at each end. We can think of this as: some profiles have lots of links, some profiles have very few links, and the typical profile has an average number of links. This may make sense in everyday life, as actual human-to-human contacts obey such properties of similar averages. However, when we add Brands, Public Figures, and Organizations into the mix, entities that millions of people know without necessarily being known back, the distribution of interaction becomes something different.

So we need to start from a blank slate: to approach the problem without prior assumptions, to avoid forcing this particular data into a shape, to approach it without a parameter.

Assume our task is to create a proper benchmark measure for social engagement. Numerous social media analytics consultants and applications do this. Let us work through their methods to verify whether this is the proper approach to the problem. Since we have two variables, the number of likes and the talking-about count, we can create a simple linear regression. Under the classical assumptions, Ordinary Least Squares (OLS) is the Best Linear Unbiased Estimator (BLUE). This requires, among other things, that the errors are homoskedastic and uncorrelated with the regressors and that the model is linear in its coefficients; the usual inference additionally assumes normally distributed errors. This allows us to use linear regression to estimate the trends for a particular profile. In creating the regression, we regress the talking-about count, the dependent variable, on the number of likes, the independent variable.
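Here is a minimal sketch of that naive regression, using simulated data in place of real Page-level observations (the variable names likes and talking_about are illustrative; statsmodels is one convenient way to fit OLS).

```python
# Sketch: naive OLS of the talking-about count on audience size (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
likes = (rng.pareto(a=1.5, size=500) + 1) * 1_000             # audience sizes, heavy-tailed
talking_about = 0.05 * likes * rng.lognormal(0, 1, size=500)  # engagement, skewed and noisy

X = sm.add_constant(likes)               # independent variable: audience size
model = sm.OLS(talking_about, X).fit()   # dependent variable: talking-about count
print(model.summary())
```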

At the core of the argument is that real-world social networks are heterogeneously distributed, and that this distribution requires a different set of mathematical tools to properly analyze and understand the network. Real-world complex systems involve many interacting simple parts whose interactions give rise to properties of the system as a whole. This approach has become dominant in many fields spanning physical, biological, and social systems. Unfortunately, despite social media's insistence on virality and the potential of exponential amplification and spread, the mathematical toolkit necessary for its proper analysis is missing.

First we need to understand what a complex network is and how a social network qualifies as one.

Complex network analysis is a relatively new phenomenon. Though it depends on work established two centuries earlier, it was only in the 1980s and 1990s that the study of chaotic systems dynamics led physicists toward mapping real-world systems. The small-world phenomenon was expounded in the mid-1990s by Watts and Strogatz. Scale-free networks were discovered only fifteen years ago in the mapping of the World Wide Web by the Barabasi and Faloutsos workgroups.

Complex network growth deals with the probabilities governing how new nodes attach. This is the core difference between a random network and a scale-free network. In a random network, new nodes are added uniformly, with no regard to the degree of existing nodes. In complex networks with nontrivial topologies, nodes are not necessarily added randomly or uniformly. In fact, in scale-free networks there is a particular equation, Barabasi-Albert preferential attachment, which suggests a universality in how new nodes are added. Real-world complex networks involve multi-level processes that prevent actors from realistically and actively obeying this equation; the actors within these networks are unconscious of their larger structure, which suggests a more fundamental underlying process. While these underlying processes do drive the development of complex networks, they are played out by actors with no discernible particular motivation: people, proteins, webpages.
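For reference, the standard form of the Barabasi-Albert rule (stated here for clarity; the text above does not spell it out) attaches each new node to an existing node i with probability proportional to that node's degree k_i:

\Pi(k_i) = \frac{k_i}{\sum_j k_j}

In its simplest form this growth rule produces a power-law degree distribution, P(k) \sim k^{-3}.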

And despite the independent motivations of each actor, the network grows in a systematic and repeatable fashion. Social networks have probably done more than anything since the six degrees of separation phenomenon of the 1990s to move complex networks toward the mainstream. They have given the average person an understanding of what it means to be a node, what it means to have links, and what it means to participate in a larger network structure. Network science and big data have combined to produce markets devoted to analyzing the traits and mathematics of personal social networks. In aggregate, the modeling of these networks has given deep insight into the dynamics of interpersonal relations and into methods of communicating with brands, public figures, and organizations.

With the rise of social network usage, those purporting to be experts in social media analytics have suggested methods of benchmarking. We have been providing such consulting, based on the research contained in this book, since 2007. We are using this opportunity to introduce nontrivial complex-network analytics into the social realm.

The major fault of social media analytics, with respect to its ignorance of developments in complex systems science, is its negligence in understanding the underlying network distribution. The non-normal distribution of social engagement means that normalization is necessary to obtain relevant insights from the underlying data. Whether this normalization is achieved through transformation, semi-parametric, or non-parametric methods depends on the familiarity of the data scientist; at a minimum, it requires accepting that normal statistics fundamentally cannot describe the phenomenon. This means that any cross-sectional analysis (that is, comparing rates of engagement between two profiles) cannot rely on ordinary averages to relate the two profiles. Because this analysis takes place at many different scales, from small businesses with ten dedicated users to multinational corporations with tens of millions of followers, the phenomena underlying them are both similar and different. This is where complexity science enters, through the underlying processes of scale-free networks.

It did not take long after the introduction of these statistics for them to be relegated to the status of ‘vanity’ metrics. But why are they vanity metrics? The reason they are being discarded is precisely because the wrong mathematics is being used to tackle the problem. In fact, the mathematics for tackling these issues was discovered fifteen years ago, in the mapping of the World Wide Web.

Perhaps the Myth of the Engagement Rate is therefore a misnomer; instead, we will resurrect and vindicate this statistic in light of these newer, complex mathematics, and of what it means to be part of a network.

We review the use of scale-free networks, this time with the real-world application of social media analysis. Understanding the heterogeneity underlying the growth processes of social networks lets those who benchmark social networks obtain relevant and meaningful insights from their data. The question we might ask, borrowing from the economics of celebrity, is: does the same pattern hold when we do the same thing on the internet, in the distribution of webpages and in the distribution of interaction between people and entities?

Networks are defined as a complex mapping between entities. Network science is rooted in graph theory and deals with the interaction mapping as a function of itself. Network scientists are interested in the mathematics and behaviors of certain entities and the ways they interact with each other. These interactions take place at various scales: in physics they involve the percolation of atoms and molecules; in biology, network scientists are concerned with the complex interactions of lifeforms within our bodies, in our cells, within our proteins, and in the ecosystem at large; economists are interested in the autonomous transfer of money in self-organized human systems and the underlying behavioral decisions that drive such processes; in the social sciences we are interested in understanding dynamics in terms of the interactions between people; and in social networks we are interested in how people interact, how they share, how they collaborate, and how they make decisions based on these network principles.

If we were to measure these decisions on the wrong topology, we would create irrelevant insights that admit no sound mathematical interpretation. It is only by understanding the static and dynamic topologies of a social network and its ensembles that we can understand the forces that drive the mathematics we are seeing.

We can measure the degree distribution by counting the number of links each node has, and then counting the frequency of those values: a histogram of the frequency of particular values. If we plot this histogram, we can form assumptions about the properties of the network it describes. The assumptions we make as a result of this histogram implicitly drive what becomes of our results. If the underlying distribution does not match the assumptions of typical normal-data statistics, then even a perfect analysis built on that data yields results that are biased or fundamentally invalid. This is consequential because it makes even something as simple and basic as an average an invalid measure, one that cannot provide the insight analysts are trying to extract.
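As a minimal sketch of the procedure described above, the following computes a degree distribution from a toy edge list (the edges here are made up for illustration; they are not the Facebook Page network).

```python
# Sketch: degree distribution from an edge list (toy data).
from collections import Counter

edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

degree = Counter()
for u, v in edges:            # undirected: each edge contributes to both endpoints
    degree[u] += 1
    degree[v] += 1

# Frequency of each degree value: the degree distribution / histogram.
distribution = Counter(degree.values())
print(dict(degree))         # {'A': 3, 'B': 2, 'C': 2, 'D': 2, 'E': 1}
print(dict(distribution))   # {3: 1, 2: 3, 1: 1}
```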

In the paper titled “On the Topology of the Facebook Page Network”, the distribution of audience size and the distribution of engagement are both power laws. Both require a logarithmic transformation to reveal the underlying dynamics. This means that the engagement rate, as commonly calculated, is fundamentally incapable of describing the phenomenon it is trying to explain.

Something as simple as a logarithmic transform of both variables provides such an improvement in these fundamental measurements that anyone peddling the current engagement rate should be viewed skeptically. The differences in the error of these measurements are astronomical due to the presence of heteroskedasticity, which is largely and immediately corrected by a logarithmic transformation. More exact nonlinear methods can be used, as can the ‘scale-free-ness’ of each particular network, but what is most important is understanding that the mathematics commonly associated with these benchmarks is fundamentally invalid, and that simple mathematical correctives exist which can greatly improve insight. Those who adopt nonlinear and logarithmic considerations will quickly become leaders in the field, as their methods of analysis will prove more fruitful when handling such complex topologies.
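A minimal sketch of that corrective, repeating the earlier naive regression but on log-transformed variables (simulated data; variable names are illustrative):

```python
# Sketch: the same regression after a logarithmic transform of both variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
likes = (rng.pareto(a=1.5, size=500) + 1) * 1_000
talking_about = 0.05 * likes * rng.lognormal(0, 1, size=500)

X_log = sm.add_constant(np.log(likes))
log_model = sm.OLS(np.log(talking_about), X_log).fit()
print(log_model.params)  # slope close to 1 recovers the proportional relationship
```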

Because engagement follows a power law, it is extremely important to use the kind of mathematics that allows you to properly analyze the underlying dynamics. This should be obvious; however, nearly a decade after the earliest social media analysis, these mathematical errors not only remain prevalent but are in fact accelerating in their use. Those in the boardrooms are being swayed by analysts without any proper understanding of the underlying complex phenomenon, who simultaneously evangelize the effects of complex topologies: for example, a social ‘guru’ who claims to preach benchmarking virality in fact has little idea of the mathematical definitions of virality, how to measure it on various network topologies, or how to integrate it into a proper management and analysis package. More of the same wrong math is still wrong math.

There is a reason that those interacting with social media analysts are skeptical of such vanity metrics: the skepticism is warranted. It is almost commonplace now to observe that social media gurus are not that good, that they are unsure of what is going on, and that they are in many ways incapable of truly describing the phenomenon.

To understand why these measurements not only come up short but are completely irrelevant, it helps to understand a little about the fundamentals of network theory and its development. Network science traces its origins to the 18th century and Euler's Seven Bridges of Königsberg problem. We will trace these origins through to today and show how graph theory and topological considerations are necessary for the measurement of social media.

Complex systems are defined as those whose small, simple parts produce incredibly rich and interesting topologies. The irreducible result of these interactions is known as an emergent property of the system. The complex dynamics of simple components are commonly dissected using algorithms that reproduce approximations of the underlying mathematical relations; typically this is done by extracting nonlinear results from ordinary differential equations, or with simple algorithms that model some part of physical, biological, or social behaviour.

But the heterogeneous power-law and scale-free properties of the social engagement network drive heteroskedasticity in the linear model, which undermines the reliability of its estimates and inference. The scale-invariant logarithmic properties of scale-free networks allow us to use a logarithmic regression to correct this violation of the Gauss-Markov homoskedasticity assumption. We show that understanding the non-linear properties is crucial to proper hypothesis testing when comparing engagement across profiles of different scales.
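One way to see this violation concretely is to test the residuals for heteroskedasticity before and after the log transform. The sketch below uses simulated data and the Breusch-Pagan test from statsmodels as one possible diagnostic; it is an illustration, not the procedure used in the paper.

```python
# Sketch: checking heteroskedasticity in levels versus logs (simulated data).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(2)
likes = (rng.pareto(a=1.5, size=500) + 1) * 1_000
engagement = 0.05 * likes * rng.lognormal(0, 1, size=500)

for name, y, x in [("levels", engagement, likes),
                   ("logs", np.log(engagement), np.log(likes))]:
    X = sm.add_constant(x)
    result = sm.OLS(y, X).fit()
    lm_stat, lm_pvalue, _, _ = het_breuschpagan(result.resid, X)
    print(f"{name:6s}  Breusch-Pagan p-value = {lm_pvalue:.3g}")
```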

Another instance in which Slattery, McHardy, and Bariathi investigate this logarithmic nature in the paper is shown below.

In the case of interactions divided by audience size, the implicit assumption is that the data are normally distributed:

If this graph looks alien to you, let us work through it quickly. First, this is the normal distribution, the Gaussian bell curve. The distribution is called normal because it behaves in a particular way. In this graphic, the x-axis shows the distance from the mean, in standard deviations, with the mean at 0. Between -1 and 1 we see the segments 15.0, 19.1, 19.1, and 15.0; this is the region within one standard deviation of the mean. Adding these up gives 68.2, a figure that might sound familiar (and induce confidence-interval nightmares) because it is the share of the data within one standard deviation of the mean: 68.2% of our data can be found in this range. If we extend the range from -2 to +2, two standard deviations, we also add 9.2 + 9.2 + 4.4 + 4.4 and get 95.4, so 95.4% of the data lies within two standard deviations of the mean. Under the naive calculation, the implication is that both the audience size and the engagement are normally distributed: the average Page has the average number of likes, as if there were such an average at all. It also means that each tail becomes increasingly infrequent; some profiles have very high values and some very low, but most Pages have an average number of Likes.
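As an aside, those percentages are easy to verify numerically; here is a quick check using scipy (not part of the paper's analysis).

```python
# Sketch: verifying the 68.2% / 95.4% figures for the standard normal curve.
from scipy.stats import norm

within_1_sigma = norm.cdf(1) - norm.cdf(-1)
within_2_sigma = norm.cdf(2) - norm.cdf(-2)
print(f"within 1 sigma: {within_1_sigma:.1%}")  # ~68.3% (the figure's segments round to 68.2)
print(f"within 2 sigma: {within_2_sigma:.1%}")  # ~95.4%
```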

If we extend this a little further and consider the cumulative distribution, then the cumulative probability of a particular profile would follow these dynamics. But is this necessarily the case? Slattery, McHardy, and Bariathi investigate this in the paper. The way to check is to take a sample of profiles and count up their numbers of likes: for each number of likes that occurs, we count its frequency. Under the normal distribution, we might find that 1 profile had 1 like, 1 profile had 100 likes, and perhaps 50 profiles had 50 likes, with each end tapering off. But in fact this is not the distribution of the Page Network at all. As they show, the distribution actually has power-law characteristics.

In the left image, we see the distribution of the Facebook Page Network. At first glance it might seem no different from the distribution we just discussed: it tails off. The difference, however, is that this graph is plotted on a logarithmic scale. The separations between points on the axes are logarithmic, and thus the relationship is actually non-linear. So it is in fact nothing like the normal distribution, and it therefore takes different math to interpret. This particular distribution, one which is linear on a logarithmic plot, is known as a power law. It appears in many real-world examples, such as the sizes of cities, the frequencies of words, and the citation counts of scientific papers.
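To make the phrase "linear on a logarithmic plot" concrete, here is a rough sketch that fits a straight line to log-frequency versus log-degree for simulated heavy-tailed data. A crude least-squares fit on the histogram is shown for illustration only; a proper estimate of the exponent would use a maximum-likelihood method such as Clauset, Shalizi, and Newman's.

```python
# Sketch: a straight line on a log-log plot indicates a power law.
# Simulated heavy-tailed data; crude least-squares fit on the log histogram.
import numpy as np

rng = np.random.default_rng(3)
degrees = np.floor(rng.pareto(a=2.0, size=20_000) + 1).astype(int)

values, counts = np.unique(degrees, return_counts=True)
mask = counts >= 5                          # drop the noisiest tail bins
log_k, log_freq = np.log(values[mask]), np.log(counts[mask])

slope, intercept = np.polyfit(log_k, log_freq, 1)
print(f"estimated power-law exponent ~ {-slope:.2f}")
```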

And here’s how we interpret it:

A common question for a social media analyst is: what is the expected value of interaction on a post given a profile's audience size? Simply: how many interactions “should” I expect if I am a profile of a particular size? The Engagement Rate as commonly calculated simply does not answer that question. Given that the distributions of the Engagement Rate and of Interaction are both most appropriately displayed logarithmically, we can use a simple linear econometric regression on the log-transformed variables as a real benchmark for interaction. By obtaining a linear estimate in log space, we obtain a more appropriate measure of what the Engagement Rate is attempting to achieve: the expected interaction given a particular audience size. Luckily for us, there is standard econometric machinery for this, and remarkably, we find the same distributional form across particular network topologies.
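A minimal sketch of such a benchmark, under the assumption of a simple log-log fit (simulated data; the profile figures below are hypothetical): fit the regression, then score a profile by how far its observed interactions sit above or below the expected value for its audience size.

```python
# Sketch: benchmarking expected interactions for a given audience size
# via a log-log regression (simulated data).
import numpy as np

rng = np.random.default_rng(4)
audience = (rng.pareto(a=1.5, size=1_000) + 1) * 1_000
interactions = 0.03 * audience * rng.lognormal(0, 0.8, size=1_000)

# Fit log(interactions) = a + b * log(audience) by least squares.
b, a = np.polyfit(np.log(audience), np.log(interactions), 1)

def expected_interactions(audience_size: float) -> float:
    """Expected interactions for a profile of this size under the fitted model."""
    return float(np.exp(a + b * np.log(audience_size)))

# Benchmark: how a hypothetical profile compares to its expectation.
observed, size = 12_000, 250_000
expected = expected_interactions(size)
print(f"expected ~{expected:,.0f}, observed {observed:,}")
print(f"log-ratio score: {np.log(observed / expected):+.2f}")
```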

We will not claim this way of measuring engagement is ideal, but it is far more relevant for comparing changes internally than across profiles, since internal data typically sit closer together and may not vary much on a logarithmic scale.

So now we have to talk about why the logarithm matters. Just as the World Wide Web is scale-free, so is the Page network, which means that the differences between profiles expand on a logarithmic scale. For social analytics there is an inherent advantage in recognizing this, and it further allows us to benchmark by sector.