An Alternative Way of Seeing Data Monetization

From early-stage payments fintechs to giant acquirers, every company is asking itself the same question: “How can we turn our data into dollars?”

After all, most companies these days are to some extent data companies, whether they are aware of it or not. Many businesses try to leverage certain types of data they capture, but there’s also a lot of valuable ‘data exhaust’ they could use without ever sharing any personal or sensitive information. This is known as alternative data and it is being rapidly monetized and shared in the US and Europe.

What is data exhaust?

No, it doesn’t refer to the exhausting nature of big data. (Though there is something to be said about that too!)

Data exhaust refers to the excess data that is generated as a byproduct of a company’s operations. Simply put, it’s all the data the firm might not know what to do with, or might not think is relevant to its core business. This amount is much bigger than you might think – Forrester reported that, on average, between 60% and 73% of all data within an enterprise goes unused.

However, with advances in IoT, machine learning and artificial intelligence, this rapidly growing volume of exhaust could hold much untapped potential. In fact, this data exhaust could end up being converted into valuable fuel, whether for better decision-making or new ancillary revenue.

Why is data monetization so hard?

Firstly, many firms struggle with what data monetization actually means. Some paths to data monetization are more obvious than others. We’re living in an era when exploiting data for advertising or marketing purposes has become a huge concern. Even when there is no threat to personal privacy, organizations still have to navigate reputational risks if there is even a whiff of data misuse.

Secondly, trying to glean insights from all this raw and unstructured data can be like finding a needle in a haystack. It’s a significant challenge in terms of resources and infrastructure, requiring data expertise that is usually not found in-house.

So what can companies do to tackle this?

Two routes to monetization

These are the two primary paths to data monetization that companies can take, though they are not mutually exclusive. In fact, the paths can intersect, and one can lead to the other:

1) Getting new business insights – This is an internally focused path that may not directly put money on the table. Instead, it’s about leveraging data to improve operations or the customer experience. In turn, this could lead to higher profitability, or to greater efficiencies that reduce costs.

Alternative data can yield insights that we might not otherwise have considered. But it’s easier said than done: as Forbes reports, 87% of executives are still not confident they can leverage all their customer data.

But first, every organization needs to take stock of its data assets and figure out which types of data potentially hold value. Then they need to assess whether they have the data management infrastructure, tools and resources to be able to extract value from it.

2) “Externally” monetize data – These days, the mere mention of “selling data” conjures negative reactions. But there are ways of monetizing non-personal data that has been aggregated and anonymized. This data can be valuable to parties you may not be thinking of, in ways you might not have imagined.

Opportunities may exist in markets that are new and unfamiliar to the data owner. For instance, firms can open up new revenue streams by selling their data to economists, analysts, investors and any other parties that are seeking to gain new and unique insights.

Raw data by itself can be one-dimensional. It is when data from different companies and sectors is combined and enriched with complementary data sets that real value is created. For instance, a company working with vendors across the country might have data on national beverage sales. It could track these sales and provide additional insights back to the vendors to help them improve sales and promotions. The company could also share this data with beverage brands so they can fine-tune and optimize marketing by city.

Think about it this way: Doing nothing with your data is the equivalent of keeping all your savings under the mattress. It seems like a safe bet, but it’s outdated and you get zero returns. Data monetization is a smarter investment – it seems daunting at first but if you can find a safe, meaningful use case, your company’s data becomes a revenue driver rather than a sleeping asset. 


Port congestion matters throughout the shipping industry and the businesses that depend on it. The more ships anchored and waiting, the higher the expected cost of shipping goods. Felixstowe in the United Kingdom is one of the most efficient ports in the world, but its efficiency wavered slightly during the last months of 2018. In November, the waiting time for ships at Felixstowe spiked above 1 hour and 15 minutes, signalling increased unease at one of Britain’s most important ports.

Felixstowe is one of the most efficient ports in Europe. It serves three thousand ships each year, unloading containers that are then distributed around England through the rail and road links connecting the port with distribution hubs. Alternative data from Marine Traffic shows that Felixstowe’s efficiency has slightly decreased since Brexit was announced. Is Brexit already happening for the biggest port in the UK?

Port congestion is the term commonly used to describe the situation where vessels have to queue up outside a port, waiting for a spot so they can load or offload their containers. The more vessels in the queue to enter the port, the higher the congestion.

Congestion can be measured in several ways. The number of vessels waiting to anchor tells us how many ships had to wait in a given week. This can be broken down by the number of calls, i.e. the number of ships calling at the port. The weekly median time in anchorage or at port shows the central tendency of waiting time, measured in hours.

With respect to efficiency, the best measurement is the standard deviation of port or anchoring time. The larger the standard deviation, the higher the risk that a ship will wait longer at a port. Higher deviations in waiting time lower efficiency and increase uncertainty. A higher deviation of anchoring time ultimately suggests heavier, less predictable port congestion.
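As a sketch of how these dispersion metrics work, the snippet below computes the weekly median and standard deviation of anchorage time from a list of per-vessel waits. The numbers are illustrative, not Marine Traffic data:

```python
from statistics import median, stdev

# Hypothetical anchorage durations (hours) for vessels in one week.
anchorage_hours = [0.5, 0.7, 0.8, 0.6, 3.5, 0.7, 0.9]

weekly_median = median(anchorage_hours)  # typical wait
weekly_stdev = stdev(anchorage_hours)    # dispersion: higher means less predictable waits

print(f"median wait: {weekly_median:.2f} h, std dev: {weekly_stdev:.2f} h")
```

Note that the single long wait (the 3.5-hour entry) barely moves the median but inflates the standard deviation, which is exactly why the deviation captures congestion risk rather than the typical experience.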

The port of Felixstowe is Britain’s biggest and busiest container port, and one of the largest in Europe. More than seventy percent of containers coming through Felixstowe are delivered to what is known as the ‘Golden Triangle’, the busiest economic region in the middle of England.

On top of that, it is one of the most efficient ports in the world, with one of the lowest standard deviations of anchoring time. It is therefore startling that the peaks in this standard deviation have been rising during such an uncertain time for the United Kingdom.

The figure below shows the median anchoring time in some of the main ports around the world during the four quarters of 2018. The main advantage of looking at the median instead of the mean is that the median is not skewed by extremely large or small values; since it divides the sample into two equal halves, it gives a more representative picture of the typical wait.
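To see why the median is the more robust statistic here, consider a toy example (illustrative values) where one vessel suffers an extreme 12-hour wait:

```python
from statistics import mean, median

# Anchoring times in hours; one vessel with an extreme 12-hour wait.
waits = [0.6, 0.7, 0.75, 0.8, 12.0]

print(f"mean:   {mean(waits):.2f} h")    # pulled up by the outlier to 2.97 h
print(f"median: {median(waits):.2f} h")  # still 0.75 h, the typical vessel
```

One outlier quadruples the mean while leaving the median untouched, so medians paint a fairer picture of day-to-day port performance.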

Felixstowe has a median anchoring time of around 45 minutes, far lower than at other ports. During the third quarter of 2018, when the port was at its busiest, the median waiting time rose to over one hour.

The figure below shows the median number of calls per port. As noted, Felixstowe has the second smallest number of calls after Kharg. This is an impressive score given the size of the port and the number of ships it serves each year.

However, when it comes to risk of anchoring time, Felixstowe has been showing some unusual volatility after Brexit was announced.

At no earlier point had the standard deviation of anchoring time exceeded two hours. In November 2018 it reached above two hours and 15 minutes, increasing uncertainty and showing signs of weakening efficiency at Felixstowe.

Why might this have happened?

Data viewpoint: During the first and last weeks of November, two of the highest extremes on record were observed in the standard deviation of anchoring time at Felixstowe. A period of frequent extremes can signal that the port cannot predict short-run congestion. This indicates increased unease at one of Britain’s most important ports.

A summary table for 2018 of the median anchoring time, the average number of calls and vessels, and the standard deviation of the median anchoring time is given below.

Download the full report: Is Brexit already happening for Felixstowe?


London ranks high on the list of cities with the most expensive housing. Yet the city has a history of rising prices and of an expected bubble that never burst. Over the past ten years, the housing market has experienced the most disrupted property cycle on record. In the midst of Brexit uncertainty, house prices in London dipped briefly, only to recover a few months later.

House price growth has slowed, and the number of houses sold in London has plunged. The London housing market is more sensitive to the Brexit saga than the English housing market as a whole.

Uncertainty and fear after the UK voted to leave the EU have caused house prices to stagnate across the United Kingdom. The majority of news articles in the UK are concentrated on Brexit, and the market’s sensitivity to the political clutter is now higher than ever.

Back in early 2016, England’s housing market was flourishing: the average house price was £220,361, approximately 9% higher than in early 2015 and 0.1% higher than in late 2015.

In June 2016 the UK voted to leave the EU, and in July the annual and monthly house price growth began its descent. Since then, the number of transactions has fallen as annual price growth slowed to less than 1%. Fears over Brexit caused English house prices to all but stagnate in 2018, rising by the smallest amount in almost six years.


According to the Financial Times, the Bank of England has said that a drop in London house prices is unlikely to create a domino effect across the rest of the country. Brexit-related uncertainty is higher in the capital than elsewhere, causing a sharper drop there.

With Brexit, net migration from the EU is expected to fall, which in turn pushes demand for houses down and pulls prices further down. Other reasons for the slowdown in price growth include rising interest rates and the disproportionate increase in the rate of stamp duty, which makes buy-to-let investments less attractive.

However, what about house prices within London? Have all areas within London experienced the same price change? How do these compare with the rest of England?

Data and Insights

The figure below shows the average monthly volatility of house prices in England, London and two London boroughs: Camden and Kensington & Chelsea.

[Figure: average monthly house price volatility in England, London, Camden and Kensington & Chelsea]

In general, UK housing prices have not been as volatile as those of London’s boroughs. In Kensington & Chelsea, average house prices reached what was then an all-time high right before the vote. From July 2016, Brexit played a role as house prices went down, and demand followed in the coming months.


On average, a house in Camden gained almost £3,000 in value each month; after Brexit this monthly increase fell to around £2,500. The story of Kensington & Chelsea is even more dramatic: the average monthly increase dropped from £4,500 to £2,000.
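The before/after comparison boils down to averaging month-on-month price changes on each side of a cutoff date. The sketch below uses illustrative prices, not the actual Land Registry series:

```python
# Illustrative monthly average prices (GBP) for a borough.
prices = [700_000, 703_000, 706_100, 709_000, 711_500, 714_000]
cutoff = 3  # index of the last pre-referendum month (hypothetical)

def avg_monthly_change(series):
    """Average month-on-month change over a price series."""
    steps = [b - a for a, b in zip(series, series[1:])]
    return sum(steps) / len(steps)

before = avg_monthly_change(prices[:cutoff + 1])  # ~3,000 GBP/month
after = avg_monthly_change(prices[cutoff:])       # ~2,500 GBP/month
print(f"before: £{before:,.0f}/month, after: £{after:,.0f}/month")
```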

What about the number of houses being bought?

[Figure: number of houses sold per month in England and London]

The left-hand axis shows the number of houses sold in England, while London is on the right-hand axis. The remarkable insight from the figure above is that houses sold in England move almost hand in hand with those in London: the two series correlate highly over time (r = 0.9).

The figure above also shows a sudden increase in transactions in both London and England at the beginning of 2016, which helps explain the market’s flourishing during this period. It may also have been driven by uncertainty about what would happen after the vote, triggering fast buying. Then Brexit happened, and the number of houses sold dropped in both London and England.

What confirms the dynamics predicted by the Bank of England is what came after: both London and England show a substantial decrease in the number of houses sold, but the drop in London was much harsher.

On 15 January 2019, the House of Commons voted down the Brexit deal proposed by the government, and uncertainty over the economic outlook appears likely to remain more dominant in London than in England as a whole.

More specifically, within some areas of London, namely Camden and Kensington & Chelsea, the number of houses sold has fallen since June 2016, and if the same trend continues, potential home buyers will not be in luck.

Download the full report: Stag-Brexit