Life on Li-Fi

7 October 2016


Wireless communication has been around for quite some time: from the earliest single-band radios to today's smartphones, which can communicate over a broad range of platforms such as Bluetooth, NFC, wifi (2.4 & 5 GHz) and multiple mobile radio networks (3G, 4G LTE and the upcoming 5G). Who would have thought, however, that one of the earliest forms of wireless communication, the naval signalling lamp, would inspire a potential successor to the microwave-based forms of communication that are abundant nowadays?

This is exactly the aim of ‘Light Fidelity’, or LiFi, which was first introduced in a TED talk by German researcher Harald Haas in 2011 (see link below) and is based on ‘Visible Light Communication’ (VLC). The concept is practically identical to naval signalling lamps, but its application has only recently become viable thanks to the ongoing development of Light Emitting Diodes (LEDs). These tiny lights can flicker far faster than the human eye can perceive (the flickering is therefore invisible to us), a process based on the tongue-twisting mechanism of ‘subcarrier-index modulation orthogonal frequency division multiplexing’. The end result is basically a laser-fast replacement for wifi: researchers have achieved communication speeds over 100x faster than regular wifi is currently capable of.
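To make the idea concrete, here is a minimal, purely illustrative sketch of visible light communication using simple on-off keying rather than the actual subcarrier-index OFDM scheme LiFi uses: bits become imperceptibly fast LED on/off states, and a photodiode recovers them by thresholding the measured intensity. All function names and parameters below are hypothetical.

```python
# Toy illustration of Visible Light Communication (not the SIM-OFDM scheme
# used by LiFi): bits are encoded as fast on/off states of an LED (on-off
# keying) and recovered by thresholding the received light intensity.
import random

def led_transmit(bits, samples_per_bit=4, on_level=1.0, off_level=0.1):
    """Map each bit to a burst of light-intensity samples (LED on or dimmed)."""
    signal = []
    for bit in bits:
        level = on_level if bit == 1 else off_level
        signal.extend([level] * samples_per_bit)
    return signal

def photodiode_receive(signal, samples_per_bit=4, noise=0.05, threshold=0.55):
    """Add a little receiver noise, then average and threshold each bit period."""
    noisy = [s + random.uniform(-noise, noise) for s in signal]
    bits = []
    for i in range(0, len(noisy), samples_per_bit):
        period = noisy[i:i + samples_per_bit]
        bits.append(1 if sum(period) / len(period) > threshold else 0)
    return bits

if __name__ == "__main__":
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    received = photodiode_receive(led_transmit(payload))
    print("sent:    ", payload)
    print("received:", received)  # matches the payload for modest noise levels
```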

What are the merits, other than mind-boggling transfer speeds? Security is one. A windowless LiFi-equipped room would only allow devices physically present in the room to communicate on the network. Installing LiFi in every room of your house thus demarcates the user base, which makes the network more secure and prevents interference. Interference is a particular problem on the popular 2.4 GHz wifi band, which can be both insecure and subject to interference.

Limiting interference also benefits the usability of LiFi, as conventional microwave-based communication devices are a risk factor in hospitals, nuclear power plants and other places where radio waves are controlled (think of research labs, for instance). Installing LiFi in these locations would not be a problem.

Further advantages include the use of existing power networks in buildings: LiFi signals can be carried over the power cables already present in most buildings, requiring only minimal adjustments to convert existing networks. In addition, positioning data can also be light-based, as the functionality of GPS is greatly reduced indoors due to signal loss. Think of smart lights in supermarkets directing customers to their desired products, or LiFi-enabled airports providing both travel information and directions to visitors.
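As a rough idea of how light-based indoor positioning could work (a hypothetical sketch, not an actual LiFi product API): assume three ceiling fixtures with known positions, convert the received light intensity from each into a distance estimate via an inverse-square law, and trilaterate the receiver's position.

```python
# Hypothetical sketch of light-based indoor positioning: three ceiling
# fixtures at known positions broadcast their IDs; the receiver converts
# measured light intensity into a distance estimate (inverse-square law)
# and then trilaterates its own (x, y) position.
import math

FIXTURES = {          # assumed fixture positions in metres (illustrative only)
    "lamp_A": (0.0, 0.0),
    "lamp_B": (6.0, 0.0),
    "lamp_C": (0.0, 4.0),
}
TX_POWER = 9.0        # assumed emitted optical power constant

def intensity_at(fixture, pos):
    """Simulated received intensity, following an inverse-square law."""
    fx, fy = FIXTURES[fixture]
    d2 = (pos[0] - fx) ** 2 + (pos[1] - fy) ** 2
    return TX_POWER / max(d2, 1e-9)

def distance_from_intensity(intensity):
    """Invert the inverse-square law to recover distance."""
    return math.sqrt(TX_POWER / intensity)

def trilaterate(distances):
    """Solve the 2x2 linear system obtained by subtracting circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = (FIXTURES[f] for f in ("lamp_A", "lamp_B", "lamp_C"))
    d1, d2, d3 = (distances[f] for f in ("lamp_A", "lamp_B", "lamp_C"))
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

if __name__ == "__main__":
    true_pos = (2.0, 1.5)   # e.g. a shopper standing in an aisle
    readings = {f: intensity_at(f, true_pos) for f in FIXTURES}
    estimate = trilaterate({f: distance_from_intensity(i) for f, i in readings.items()})
    print("estimated position:", estimate)   # ~(2.0, 1.5)
```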

So is LiFi going to be the successor to century-old radio-based communication methods? For wifi, it might be, given that most places offering wifi are also equipped with a roof, keeping direct sunlight out of the equation. This is LiFi’s main weakness: the intensity of sunlight completely drowns out the LEDs, which can produce only a tiny fraction of the brightness of the nuclear fusion-powered powerhouse that lights and warms the entire earth. So even though LiFi is a lot more efficient than regular mobile radio antennas (it does not require continuous cooling, for instance), outdoors it is only really usable at night. Experiments are nevertheless abundant: Chicago will be the first city to feature LiFi-enabled street lighting, allowing residents to access the internet at (wait for it) lightning speeds.

Sources:

https://www.ted.com/talks/harald_haas_wireless_data_from_every_light_bulb

http://www.economist.com/news/science-and-technology/21707515-lighting-fixtures-also-transmit-data-are-starting-appear-whole-new

http://purelifi.com/

http://www.techworld.com/big-data/what-is-li-fi-everything-you-need-know-3632764/


Technology of the Week – P2P Lending (Group 53)

6 October 2016


With interest rates stuck at an all-time low, consumers nowadays gain little in the way of returns on their savings. Even worse, some commercial banks have recently hinted at the prospect of setting negative rates, meaning that consumers would have to pay in order to store their cash, on top of regular administration fees. Adjusted for inflation, these consumers are collectively becoming poorer over time.

On the other end are credit-seekers, for instance consumers who are indebted due to credit-card repayments, college tuition or unforeseen medical expenses. The process of obtaining a small personal loan through a commercial bank is more often than not as complicated as it is likely to end in rejection. Even though these consumers are willing to pay some level of interest in order to clear their immediate debts or make new purchases, they are (severely) constrained in their ability to do so.

Enter the platform-mediated network: an online community where lenders and borrowers connect. Credit-seekers fill out a loan request, which is listed on the platform for all potential funders to review. Investors construct their own personalised portfolios by (partially) funding the requests that pass their judgement. The platform does the rest: loan contracts, interest rates, repayments, special clauses, administration and taxation are all taken care of. The credit-seeker gets a loan at a favourable rate, while investors are compensated with solid market-adjusted returns.
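To illustrate the repayment mechanics a platform like this automates, here is a small sketch using the standard annuity formula and entirely made-up figures; real platforms layer fees, defaults and tax handling on top of this.

```python
# Illustrative sketch of the repayment mechanics a P2P platform handles for
# its users: a standard annuity formula splits a loan into fixed monthly
# instalments, and each investor who partially funded the loan receives a
# pro-rata share. All figures are hypothetical.

def monthly_payment(principal, annual_rate, months):
    """Fixed monthly instalment for a fully amortising loan (annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def investor_cashflows(principal, annual_rate, months, funding_shares):
    """Split each month's instalment across investors by their funded fraction."""
    payment = monthly_payment(principal, annual_rate, months)
    return {investor: round(payment * amount / principal, 2)
            for investor, amount in funding_shares.items()}

if __name__ == "__main__":
    loan = 10_000          # borrower requests 10,000 at 8% over 36 months
    shares = {"investor_1": 6_000, "investor_2": 3_000, "investor_3": 1_000}
    print("monthly instalment:", round(monthly_payment(loan, 0.08, 36), 2))
    print("monthly payout per investor:", investor_cashflows(loan, 0.08, 36, shares))
```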

It is perhaps not surprising that this model has seen some relative success in the past decade. The two earliest platforms in the United States, LendingClub and Prosper, have over the years serviced more than 26 billion USD in loans. The former went public in December 2014. The idea is catching on in other economies as well, most notably in China, where consumer propensity to save is notoriously high. Yirendai and Lufax, two of the largest Chinese platforms, went public on the NYSE as well.

What exactly is the appeal of these platforms? Their explosive growth can be explained by a combination of two very potent factors:

1) Costs are low, for both sides of the platform. Lenders and borrowers alike benefit from what researchers have dubbed the ‘disintermediation of banking functions’, which essentially means that expensive middlemen are cut out of the process, letting consumers control (parts of) it themselves. This consumer-at-the-wheel approach depends strongly on a solid community in which frauds are called out by other users and investors work together to screen and monitor their own portfolios.

2) Network effects, brought about by informational transparency. The very nature of these platforms is embedded in their openness: once a user becomes a member of the community, a world of information becomes available. This benefits both credit-seekers, who can review past requests and identify which characteristics make a request attractive, and investors, who can learn from those same requests how to assess the credibility of potential borrowers. Cross-side effects are prevalent too: when supply increases (i.e. more investors are looking to buy into loans), demand follows suit, and when demand grows, supply increases in the same manner. This continuous process of the two sides of the market reinforcing each other has led to near-exponential growth of the platforms (a toy illustration of this dynamic is sketched after the video link below). Watch our video to find out more about how these platforms work, their main challenges and future prospects:

https://vimeo.com/184906719
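For readers curious about the cross-side dynamics mentioned under point 2, here is a deliberately simple toy model (not calibrated to any real platform, all parameters invented): each side's growth rate depends on the current size of the other side, which quickly produces the accelerating, near-exponential growth pattern described above.

```python
# Toy model of cross-side network effects (illustrative only): each side's
# per-period growth rate is driven by the size of the other side, so the
# growth of both sides accelerates over time.

def simulate(periods=12, investors=100.0, borrowers=100.0,
             pull_on_investors=0.0010, pull_on_borrowers=0.0012):
    history = [(round(investors), round(borrowers))]
    for _ in range(periods):
        # more borrowers make the platform more attractive to new investors,
        # and more investors (faster, cheaper funding) attract new borrowers
        investor_growth = pull_on_investors * borrowers
        borrower_growth = pull_on_borrowers * investors
        investors *= 1 + investor_growth
        borrowers *= 1 + borrower_growth
        history.append((round(investors), round(borrowers)))
    return history

if __name__ == "__main__":
    for period, (inv, bor) in enumerate(simulate()):
        print(f"period {period:2d}: {inv:6d} investors, {bor:6d} borrowers")
```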


The Perpetually Imminent ‘Information Meltdown’

20 September 2016


When, back in 1972, Robert Metcalfe was struck by an idea that would revolutionise networking technology, the possibilities seemed endless. The computers of the time, such as the Xerox Alto workstations he was working with, did not support a unified way of transmitting data over a wired network. The main problem was interference: when multiple nodes in a network transmit messages at the same time, their contents are garbled. Metcalfe solved this issue by devising specific rules for network communication that are now known as Ethernet. In essence, the rules prescribe that a node listens to the shared cable while it transmits; when a collision occurs (another node transmitting at the same time), the node waits a short, random amount of time and simply re-sends the information until the transfer succeeds. These pieces of information are called ‘packets’ and are still at the heart of modern-day networking (check the network statistics in Task Manager (Windows) or Activity Monitor (Mac)).
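As a rough illustration of that retransmission idea (a toy simulation, not a faithful Ethernet/CSMA-CD implementation), the sketch below lets a few nodes share one channel: transmissions that overlap in a time slot collide, and colliding nodes retry after a random backoff window that doubles with every consecutive collision.

```python
# Toy simulation of collision handling on a shared channel: one slot per
# iteration, simultaneous transmissions collide, and colliding nodes back
# off for a random number of slots drawn from a window that doubles with
# each consecutive collision (illustrative only).
import random

def simulate(frames_per_node=3, nodes=3, max_slots=200, seed=7):
    random.seed(seed)
    pending = {n: frames_per_node for n in range(nodes)}   # frames left per node
    wait = {n: 0 for n in range(nodes)}                    # backoff slots remaining
    collisions = {n: 0 for n in range(nodes)}              # consecutive collisions
    for slot in range(max_slots):
        transmitting = [n for n in range(nodes) if pending[n] > 0 and wait[n] == 0]
        for n in range(nodes):                             # count down backoff timers
            wait[n] = max(0, wait[n] - 1)
        if len(transmitting) == 1:                         # clean transmission
            sender = transmitting[0]
            pending[sender] -= 1
            collisions[sender] = 0
            print(f"slot {slot:3d}: node {sender} delivered a frame")
        elif len(transmitting) > 1:                        # overlap: frames garbled
            for n in transmitting:
                collisions[n] += 1
                wait[n] = random.randint(1, 2 ** collisions[n])
            print(f"slot {slot:3d}: collision between nodes {transmitting}")
        if all(left == 0 for left in pending.values()):
            print(f"all frames delivered after {slot + 1} slots")
            break

if __name__ == "__main__":
    simulate()
```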

Metcalfe did, however, anticipate certain limitations of his revolutionary technology that still challenge today’s internet: as more and more users (nodes) take part in the system, the number of potential connections increases dramatically, as does the volume of traffic. According to his predictions, the internet should have collapsed under the weight of its traffic in 1996. He is not the only one predicting doom and gloom; multiple (authoritative) sources have at some point predicted that either the amount of information or the transfer of that information would exceed capacity, causing the entire system to grind to a halt.
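For intuition on how quickly those "potential connections" pile up, a back-of-the-envelope calculation of the pairwise-connection count (the quantity behind Metcalfe's own law) is enough:

```python
# Back-of-the-envelope: a network of n nodes has n * (n - 1) / 2 potential
# pairwise connections, so the connection count grows quadratically even
# though the number of users only grows linearly.

def potential_connections(n):
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 1_000_000):
    print(f"{n:>9,} nodes -> {potential_connections(n):>15,} potential connections")
```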

The Economist forecasts information abundance in 2010 (left); Cisco forecasts traffic in 2016 (right).

Yet, here we are. The internet is as alive as it ever was, a remarkable fact considering the near-exponential growth of the past decade (driven by, for instance, cloud computing and the Internet of Things). More and more users are connected throughout the world, and the total amount of information shared between them is expected to exceed one zettabyte this year, ushering in the ‘zettabyte era’ according to Cisco (a zettabyte is roughly one billion terabytes). So how does the internet cope?

One way is massive investment. Back in the 2000s, when capacity problems started to surface, increasing availability and bandwidth became the focal points for the companies maintaining the backbone of the internet. Transoceanic cables and off-shore solutions provided the required access, and their number grew rapidly.

In addition, most developed economies are heavily supporting the roll-out of fiber, which can be regarded as the successor to old-fashioned copper cabling. Decreasing production and deployment costs allow fiber connections to be implemented at a more granular level: from city blocks and industrial areas straight to (individual) consumers’ homes.

Datacenters are continually improving their services too. Nowadays, these enormous internet warehouses focus on three main goals: increasing agility, availability and efficiency. Agile services and hardware allow operators to adapt quickly to new technologies and provide services to customers faster, for instance by continuously upgrading internal cabling, switches and routers (any of which can become a hardware bottleneck in the future). Availability and redundancy mostly impact day-to-day operations. Because the demand from connected consumers is never-ending, designing and maintaining an efficient, never-offline datacenter is challenging, yet modular designs, preconfigured hardware and clever management provide ample opportunities to do so.

So for now, information availability is secure. The real challenge concerns how to find relevant and useful information within the ever-increasing bulk of data that the internet is able to provide straight to your preferred device.


Sources:

http://www.economist.com/node/15557443

http://www.economist.com/node/18895468

http://www.economist.com/node/12673221

http://www.economist.com/node/15048791

http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/vni-hyperconnectivity-wp.html

http://tacdata.squarespace.com/home/2012/1/13/transoceanic-communications-cables.html

http://searchdatacenter.techtarget.com/feature/QA-How-data-centers-can-keep-up-with-massive-Internet-user-growth
