Tuesday, 1 November 2016

Wireless: the next generation..

A new wave of mobile technology is on its way, and will bring drastic change

THE future is already arriving; it is just a question of knowing where to look. On Changshou Road in Shanghai, eagle eyes may spot an odd rectangular object on top of an office block: it is a collection of 128 miniature antennae. Pedestrians in Manhattan can catch a glimpse of apparatus that looks like a video camera on a stand, but jerks around and has a strange, hornlike protrusion where the lens should be. It blasts a narrow beam of radio waves at buildings so they can bounce their way to the receiver. The campus of the University of Surrey in Guildford, England, is dotted with 44 antennae, which form virtual wireless cells that follow a device around.
These antennae are vanguards of a new generation of wireless technologies. Although the previous batch, collectively called “fourth generation”, or 4G, is still being rolled out in many countries, the telecoms industry has already started working on the next, 5G. On February 12th AT&T, America’s second-largest mobile operator, said it would begin testing whether prototype 5G circuitry works indoors, following similar news in September from Verizon, the number one. South Korea wants to have a 5G network up and running when it hosts the Winter Olympics in 2018; Japan wants the same for the summer games in 2020. When the industry holds its annual jamboree, Mobile World Congress, in Barcelona this month, 5G will top the agenda.
Mobile telecoms have come a long way since Martin Cooper of Motorola, inventor of the DynaTAC, the first commercially available handset, demonstrated it in 1973. In the early 2000s, when 3G technology made web-browsing feasible on mobiles, operators splashed out more than $100 billion on radio-spectrum licences, only to find that the technology most had agreed to use was harder to implement than expected.
The advent of 5G is likely to bring another splurge of investment, just as orders for 4G equipment are peaking. The goal is to be able to offer users no less than the “perception of infinite capacity”, says Rahim Tafazolli, director of the 5G Innovation Centre at the University of Surrey. Rare will be the device that is not wirelessly connected, from self-driving cars and drones to the sensors, industrial machines and household appliances that together constitute the “internet of things” (IoT).
It is easy to dismiss all this as “a lot of hype”, in the words of Kester Mann of CCS Insight, a research firm. When it comes to 5G, much is still up in the air: not only which band of radio spectrum and which wireless technologies will be used, but what standards makers of network gear and handsets will have to comply with. Telecoms firms have reached consensus only on a set of rough “requirements”. The most important are connection speeds of up to 10 gigabits per second and response times (“latency”) of below 1 millisecond.
Yet the momentum is real. South Korea and Japan are front-runners in wired broadband, and Olympic games are an opportunity to show the world that they intend also to stay ahead in wireless, even if that may mean having to upgrade their 5G networks to comply with a global standard once it is agreed. AT&T and Verizon both invested early in 4G, and would like to lead again with 5G. The market for network equipment has peaked, as recent results from Ericsson and Nokia show, so the makers also need a new generation of products and new groups of customers.
On the demand side, too, pressure is mounting for better wireless infrastructure. The rapid growth in data traffic will continue for the foreseeable future, says Sundeep Rangan of NYU Wireless, a department of New York University. According to one estimate, networks need to be ready for a 1,000-fold increase in data volumes in the first half of the 2020s. And the radio spectrum used by 4G, which mostly sits below 3 gigahertz, is running out, and thus getting more expensive. An auction in America last year raked in $45 billion.
But the path to a 5G wireless paradise will not be smooth. It is not only the usual telecoms suspects who will want a say in this mother of all networks. Media companies will want priority to be given to generous bandwidth, so they can stream films with ever higher resolution. Most IoT firms will not need much bandwidth, but will want their sensors to run on one set of batteries for years—so they will want the 5G standard to put a premium on low power consumption. Online-gaming firms will worry about latency: players will complain if it is too high.
The most important new actors, however, are information-technology firms. The likes of Apple, IBM and Samsung have a big interest not only in selling more smartphones and other mobile devices, but also in IoT, which is tipped to generate the next big wave of revenues for them and other companies. Google, which already operates high-speed fibre-optic networks in several American cities and may be tempted to build a wireless one, has shown an interest in 5G. In 2014 it bought Alpental Technologies, a startup which was developing a cheap, high-speed communications service using extremely high radio frequencies known as “millimetre wave” (mmWave), spectrum far above the sub-3-gigahertz bands used by today's networks, where much of 5G's new capacity is expected to live.
To satisfy all these actors will not be easy, predicts Ulf Ewaldsson, Ericsson’s chief technology officer. Questions over spectrum may be the easiest to solve, in part because the World Radiocommunication Conference, established by international treaty, will settle them. Its last gathering, in November, failed to agree on the frequencies for 5G, but it is expected to do so when it next meets in 2019. It is likely to carve out space in the mmWave bands. Tests such as the one in Manhattan mentioned above, which are conducted by researchers from NYU Wireless, have shown that such bands can be used for 5G: although they are blocked even by thin obstacles, they can be made to bounce around them.
For the first time there will not be competing sets of technical rules, as was the case with 4G, when LTE, now the standard, was initially threatened by WiMax, which was bankrolled by Intel, a chipmaker. Nobody seems willing to play Intel’s role this time around. That said, 5G will be facing a strong competitor, especially indoors: smartphone users are increasingly using Wi-Fi connections for calls and texts as well as data. That means they have ever less need for a mobile connection, no matter how blazingly fast it may be.
Evolution or revolution?
Technology divides the industry in another way, says Stéphane Téral of IHS, a market-research firm. One camp, he says, wants 5G “to take an evolutionary path, use everything they have and make it better.” It includes many existing makers of wireless-network gear and some operators, which want to protect their existing investments and take one step at a time. On February 11th, for instance, Qualcomm, a chip-design firm, introduced the world’s first 4G chip set that allows for data-transmission speeds of up to 1 gigabit per second. It does the trick by using a technique called “carrier aggregation”, which means it can combine up to ten wireless data streams of 100 megabits per second.
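The arithmetic behind that headline figure is straightforward, and a tiny sketch makes it concrete. This is illustrative only; the carrier count and per-carrier rate below are simply the figures quoted in the paragraph above, not Qualcomm specifications:

```c
#include <stdio.h>

/* Illustrative only: carrier aggregation sums the peak rates of the
 * component carriers a device can receive simultaneously. */
int main(void) {
    const int carriers = 10;          /* component carriers combined      */
    const int mbps_per_carrier = 100; /* peak rate of each stream, Mbit/s */

    int aggregate_mbps = carriers * mbps_per_carrier;
    printf("Aggregate peak rate: %d Mbit/s (= %.1f Gbit/s)\n",
           aggregate_mbps, aggregate_mbps / 1000.0);
    return 0;
}
```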
The other camp, explains Mr Téral, favours a revolutionary approach: to jump straight to cutting-edge technology. This could mean, for instance, leaving behind the conventional cellular structure of mobile networks, in which a single antenna communicates with all the devices within its cell. Instead, one set of small antennae would send out concentrated radio beams to scan for devices, then a second set would take over as each device comes within reach. It could also mean analysing usage data to predict what kind of connectivity a wireless subscriber will need next and adapt the network accordingly—a technique that the 5G Innovation Centre at the University of Surrey wants to develop.
One of the most outspoken representatives of the revolutionary camp is China Mobile. For Chih-Lin I, its chief scientist, wireless networks, as currently designed, are no longer sustainable. Antennae are using ever more energy to push each extra megabit through the air. Her firm's position, she says, is born of necessity: it is the world's biggest carrier, with 1.1m 4G base stations and 825m subscribers (more than all the European operators put together), so problems with the current network architecture are exacerbated by its scale. Sceptics suspect there may be an “industrial agenda” at work, one that favours Chinese equipment-makers and lowers the patent royalties they have to pay. The more different 5G is from 4G, the higher the chances that China can make its own intellectual property part of the standard.
Whatever the motivation, Ms I’s vision of how 5G networks will ultimately be designed is widely shared. They will not only be “super fast”, she says, but “green and soft”, meaning much less energy-hungry and entirely controlled by software. As with computer systems before them, much of a network’s specialised hardware, such as the processor units that sit alongside each cell tower, will become “virtualised”—that is, it will be replaced with software, making it far easier to reconfigure. Wireless networks will become a bit like computing in the online “cloud”, and in some senses will merge with it, using the same off-the-shelf hardware.
Discussions have already begun about how 5G would change the industry’s structure. One question is whether wireless access will become even more of a commodity, says Chetan Sharma, a telecoms consultant. According to his estimates, operators’ share of total industry revenues has already fallen below 50% in America, with the rest going to mobile services such as Facebook’s smartphone apps, which make money through ads.
The switch to 5G could help the operators reverse that decline by allowing them to do such things as market their own video content. But it is easier to imagine their decline accelerating, turning them into low-margin “dumb pipes”. If so, a further consolidation of an already highly concentrated industry may be inevitable: some countries may be left with just one provider of wireless infrastructure, just as they often have only one provider of water.
If the recent history of IT after the rise of cloud computing is any guide—with the likes of Dell, HP and IBM struggling to keep up—network-equipment makers will also get squeezed. Ericsson and Nokia already make nearly half of their sales by managing networks on behalf of operators. But 5G may finally bring about what has been long talked of, says Bengt Nordstrom of Northstream, another consulting firm: the convergence of the makers of computers and telecoms equipment, as standardisation and low margins force them together. Last year Ericsson formed partnerships first with HP and then with Cisco. Full mergers could follow at some point.
Big, ugly mobile-phone masts will also become harder to spot. Antennae will be more numerous, for sure, but will shrink. Besides the rectangular array that China Mobile is testing in Shanghai, it is also experimenting with smaller, subtler “tiles” that can be combined and, say, embedded into the lettering on the side of a building. In this sense, but few others, the future of mobile telecoms will be invisible. 

Wednesday, 19 October 2016

Touch Screen Technology – Definition, Working, Types & Applications..

Touch screen technology is a direct-manipulation, gesture-based technology; direct manipulation is the ability to manipulate the digital world inside a screen. A touch screen is an electronic visual display capable of detecting and locating a touch over its display area, where “touching” generally refers to contact with a finger or hand. The technology is widely used in computers, user-interactive machines, smartphones and tablets to replace most functions of the mouse and keyboard.
Touch screen technology has been around for a number of years, but it has advanced in leaps and bounds recently, and companies are building it into more and more of their products. The three most common touch screen technologies are resistive, capacitive and SAW (surface acoustic wave). Most low-end touch screen devices ship on a standard printed-circuit plug-in board and communicate over the SPI protocol. Such a system has two parts, hardware and software: the hardware architecture is a stand-alone embedded system built around an 8-bit microcontroller plus several types of interface and driver circuits, and the system's software driver is developed in C.
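As a concrete flavour of such a driver, here is a minimal C sketch of polling one touch sample over SPI. It assumes an XPT2046/ADS7843-style resistive touch controller, whose 12-bit conversion result arrives in bits 14..3 of the two bytes clocked out after a command byte; the `spi_transfer` routine is a hypothetical stand-in for the platform's SPI driver and is simulated here so the sketch runs as-is:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the platform's SPI byte-exchange routine.
 * It replays a canned response so the sketch runs anywhere; on real
 * hardware it would clock one byte out and one byte in. */
static uint8_t spi_transfer(uint8_t out) {
    static const uint8_t canned[] = { 0x00, 0x3F, 0xA8 }; /* simulated reply */
    static unsigned i = 0;
    (void)out;
    return canned[i++ % 3];
}

/* Ask an XPT2046-style controller for one 12-bit conversion. The
 * command byte selects the axis (commonly 0xD0 = X, 0x90 = Y); the
 * result sits in bits 14..3 of the two bytes that follow. */
static uint16_t touch_read_axis(uint8_t command) {
    spi_transfer(command);
    uint8_t hi = spi_transfer(0x00);
    uint8_t lo = spi_transfer(0x00);
    return (uint16_t)((((hi << 8) | lo) >> 3) & 0x0FFF);
}

int main(void) {
    uint16_t raw_x = touch_read_axis(0xD0);
    uint16_t raw_y = touch_read_axis(0x90);
    printf("raw touch sample: x=%u y=%u (0..4095)\n", raw_x, raw_y);
    return 0;
}
```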

Types of Touch Screen Technology:

A touch screen is a two-dimensional sensing device made of two sheets of material separated by spacers. There are four main touch screen technologies: resistive, capacitive, surface acoustic wave (SAW) and infrared (IR).
Resistive:
A resistive touch screen is composed of a flexible top layer (typically a PET plastic film) and a rigid glass bottom layer, separated by insulating dots and attached to a touch screen controller. Resistive panels are more affordable, but they transmit only about 75% of the monitor's light, and the top layer can be damaged by sharp objects. Resistive touch screens are further divided into 4-, 5-, 6-, 7- and 8-wire designs. All are constructed in a similar way, but each has a distinct method of determining the coordinates of the touch.
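In a 4-wire design, for example, the controller biases one layer and samples the other with an ADC, so converting the two raw readings into pixel coordinates is a simple linear calibration. The sketch below shows that mapping; the calibration constants, the 12-bit ADC range and the 320x240 panel size are all invented for illustration:

```c
#include <stdio.h>

/* Hypothetical calibration for one particular panel: the raw ADC codes
 * observed when touching the edges of the visible area. Real drivers
 * obtain these from a calibration routine. */
#define RAW_X_MIN  200
#define RAW_X_MAX 3900
#define RAW_Y_MIN  250
#define RAW_Y_MAX 3850
#define SCREEN_W   320
#define SCREEN_H   240

/* Linearly map a raw 12-bit reading into screen pixels, clamping
 * readings that fall outside the calibrated range. */
static int map_axis(int raw, int raw_min, int raw_max, int pixels) {
    if (raw < raw_min) raw = raw_min;
    if (raw > raw_max) raw = raw_max;
    return (raw - raw_min) * (pixels - 1) / (raw_max - raw_min);
}

int main(void) {
    int raw_x = 2048, raw_y = 1024;   /* pretend samples from the ADC */
    printf("pixel = (%d, %d)\n",
           map_axis(raw_x, RAW_X_MIN, RAW_X_MAX, SCREEN_W),
           map_axis(raw_y, RAW_Y_MIN, RAW_Y_MAX, SCREEN_H));
    return 0;
}
```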
Capacitive:
A capacitive touch screen panel is coated with a material that stores electrical charge, and capacitive systems can transmit up to 90% of the monitor's light. The technology is divided into two categories. In surface-capacitive technology, only one side of the insulator is coated with a conducting layer.
Whenever a human finger touches the screen, charge is conducted across the uncoated surface, forming a dynamic capacitor. The controller then determines the position of the touch by measuring the change in capacitance at the four corners of the screen.
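A much-simplified model of that corner measurement is enough to see the idea: if the four corner currents are normalised, the fraction flowing from the right-hand corners tracks the X position and the fraction from the top corners tracks the Y position. The weighting below is a textbook approximation with invented current values; real controllers add per-panel calibration on top:

```c
#include <stdio.h>

/* Much-simplified surface-capacitive model: the current drawn from each
 * corner grows as the touch gets closer to that corner, so normalised
 * corner sums give a rough position estimate (0.0 .. 1.0 per axis). */
static void estimate_touch(double i_tl, double i_tr,
                           double i_bl, double i_br,
                           double *x, double *y) {
    double total = i_tl + i_tr + i_bl + i_br;
    *x = (i_tr + i_br) / total;   /* weight toward the right edge */
    *y = (i_tl + i_tr) / total;   /* weight toward the top edge   */
}

int main(void) {
    /* Pretend corner currents (arbitrary units) for a touch near the
     * top-right of the panel. */
    double x, y;
    estimate_touch(0.9, 1.6, 0.6, 1.1, &x, &y);
    printf("estimated position: x=%.2f y=%.2f (fractions of panel)\n", x, y);
    return 0;
}
```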
In projected-capacitive technology, the conductive layer (indium tin oxide, ITO) is etched to form a grid of multiple horizontal and vertical electrodes, so sensing takes place along both the X and Y axes. To increase accuracy, the projected screen in effect contains a sensor at every intersection of a row and a column, as the sketch below illustrates.
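To make the grid idea concrete, the sketch below scans an invented matrix of capacitance deltas (each cell is the change from that intersection's untouched baseline), picks the strongest intersection, and then refines the result with a weighted centroid, which is roughly how controllers resolve positions finer than the electrode pitch:

```c
#include <stdio.h>

#define ROWS 4
#define COLS 5

/* Invented capacitance deltas (touched minus baseline) at each
 * row/column intersection; the bump is centred near row 2, col 2. */
static const int delta[ROWS][COLS] = {
    { 0, 1,  2,  1, 0 },
    { 1, 4,  9,  5, 1 },
    { 2, 8, 20, 12, 2 },
    { 1, 3,  7,  4, 1 },
};

int main(void) {
    /* 1. Find the intersection with the largest delta. */
    int best_r = 0, best_c = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (delta[r][c] > delta[best_r][best_c]) { best_r = r; best_c = c; }

    /* 2. Refine with a weighted centroid over the grid, yielding a
     *    touch position between electrodes. */
    double sum = 0, rx = 0, ry = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++) {
            sum += delta[r][c];
            ry  += (double)r * delta[r][c];
            rx  += (double)c * delta[r][c];
        }
    printf("peak cell: row %d col %d\n", best_r, best_c);
    printf("centroid : row %.2f col %.2f\n", ry / sum, rx / sum);
    return 0;
}
```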
Infrared:
In infrared touch screen technology, the X and Y axes of the frame are fitted with pairs of IR LEDs and photodetectors. Whenever the user touches the screen, the finger interrupts some of the beams, and the photodetectors register the resulting disturbance in the pattern of light emitted by the LEDs.
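In software, locating the touch then reduces to finding the blocked beams on each axis. A minimal sketch, with the detector states invented for illustration (1 = beam received, 0 = beam blocked):

```c
#include <stdio.h>

#define X_BEAMS 8
#define Y_BEAMS 6

/* Invented detector readings: 1 = light received, 0 = beam blocked.
 * A fingertip typically shadows one or two adjacent beams per axis. */
static const int x_clear[X_BEAMS] = { 1, 1, 1, 0, 0, 1, 1, 1 };
static const int y_clear[Y_BEAMS] = { 1, 1, 0, 1, 1, 1 };

/* Return the centre of the blocked run on one axis, or -1 if none. */
static double blocked_centre(const int *clear, int n) {
    int first = -1, last = -1;
    for (int i = 0; i < n; i++)
        if (!clear[i]) { if (first < 0) first = i; last = i; }
    return (first < 0) ? -1.0 : (first + last) / 2.0;
}

int main(void) {
    printf("touch at beam coordinates (%.1f, %.1f)\n",
           blocked_centre(x_clear, X_BEAMS),
           blocked_centre(y_clear, Y_BEAMS));
    return 0;
}
```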
Surface Acoustic wave:
Surface acoustic wave technology uses two transducers placed along the X and Y axes of the monitor's glass plate, together with reflectors that steer the acoustic waves sent from one transducer to the other across the surface. When the screen is touched, part of the wave is absorbed, and the touch is detected at that point. This technology provides excellent throughput and image quality.
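The position calculation can be sketched as a time-of-flight problem: the receiving transducer sees an attenuation dip at a time offset proportional to how far along the axis the absorbing touch sits. The amplitude samples, sampling interval and wave speed below are assumptions for illustration (surface waves in glass travel at roughly 3 mm per microsecond):

```c
#include <stdio.h>

#define SAMPLES 10

/* Invented received-amplitude samples; the attenuation dip marks the
 * arrival of the wavefront that crossed the touch point. */
static const double amplitude[SAMPLES] =
    { 1.00, 0.99, 1.00, 0.98, 0.55, 0.97, 1.00, 0.99, 1.00, 1.00 };

int main(void) {
    const double sample_period_us = 2.0;      /* assumed sampling interval  */
    const double wave_speed_mm_per_us = 3.1;  /* assumed speed in the glass */

    /* Find the sample where attenuation is strongest. */
    int dip = 0;
    for (int i = 1; i < SAMPLES; i++)
        if (amplitude[i] < amplitude[dip]) dip = i;

    double t_us = dip * sample_period_us;
    printf("dip at t=%.1f us -> touch %.1f mm along the axis\n",
           t_us, t_us * wave_speed_mm_per_us);
    return 0;
}
```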

Components and working of touch screen:

Operation when using the touch screen panel
A basic touch screen has three main components: a touch sensor, a controller, and a software driver. The touch screen must be combined with a display and a PC to make a complete touch screen system.
Touch sensor:
The sensor generally has an electrical current or signal passing through it, and touching the screen causes a change in that signal. This change is used to determine the location of the touch on the screen.
Controller:
The controller is connected between the touch sensor and the PC. It takes the information from the sensor and translates it into a form the PC can understand. The controller also determines what type of connection to the PC is needed.
Software driver:
The software driver allows the computer and the touch screen to work together. It tells the operating system how to interpret the touch-event information sent from the controller.

Application – Remote control using Touch screen technology:

Controlling vehicles and robots using a touch-screen-based remote
The touch screen is one of the simplest PC interfaces to use, which suits it to a large number of applications: information can be accessed simply by touching the display. Touch screen systems are useful in applications ranging from industrial process control to home automation.
With a graphical interface, anyone can monitor and control complex operations in real time simply by touching the screen.
At the transmitting end, a touch screen control unit sends direction commands to the robot, telling it to move forward or backward or to rotate left or right, as the sketch below shows. At the receiving end, four motors are interfaced with the microcontroller: two are used for arm and grip movement of the robot, and the other two for body movement.
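A minimal sketch of that transmitter-side logic in C: the panel is divided into command regions and each touch is translated into a direction code for the robot. The panel size, region boundaries and command codes are all invented for illustration:

```c
#include <stdio.h>

/* Hypothetical direction codes radioed to the robot. */
enum command { CMD_STOP, CMD_FORWARD, CMD_BACKWARD, CMD_LEFT, CMD_RIGHT };

/* Divide an assumed 320x240 panel into five regions: top = forward,
 * bottom = backward, left/right = rotate, centre = stop. */
static enum command region_to_command(int x, int y) {
    if (y <  80) return CMD_FORWARD;
    if (y > 160) return CMD_BACKWARD;
    if (x < 107) return CMD_LEFT;
    if (x > 213) return CMD_RIGHT;
    return CMD_STOP;
}

int main(void) {
    static const char *names[] =
        { "stop", "forward", "backward", "rotate left", "rotate right" };
    int x = 250, y = 120;   /* pretend touch sample */
    printf("touch (%d,%d) -> %s\n", x, y, names[region_to_command(x, y)]);
    return 0;
}
```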
Remote operations such as answering calls, locating and communicating with staff, and operating vehicles and robots can all be carried out with touch screen technology over a wireless link; RF or infrared communication may be used for this purpose.

A real time Application: Controlling home appliances using Touch Screen Technology

It is possible to control electrical appliances at home using touch screen technology. The whole system works by sending input commands from the touch screen panel over an RF link; these are received at the receiver end and control the switching of the loads.
At the transmitter end, a touch screen panel is interfaced to the microcontroller through a touch screen connector. When an area of the panel is touched, the X and Y coordinates of that area are sent to the microcontroller, which generates a binary code from the input.
This 4-bit binary data is presented to the data pins of the HT12E encoder, which produces a serial output. The serial output is then transmitted using an RF module and an antenna.
At the receiver end, the RF module receives and demodulates the coded serial data, which is passed to the HT12D decoder. The decoder converts the serial stream back into the parallel data originally sent by the microcontroller at the transmitting end. The microcontroller at the receiver end reads this data and accordingly sends a logic-low signal to the corresponding optoisolator, which in turn switches on the respective TRIAC, allowing AC current to reach the load so that the respective load is switched on.
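The microcontroller's job at the transmitting end therefore reduces to turning each touch into a 4-bit code for the encoder's data pins; the HT12E/HT12D pair and the RF module handle serialisation and transport. A hedged sketch of that reduction, with the on-screen button layout and bit assignment invented for illustration:

```c
#include <stdio.h>
#include <stdint.h>

/* Map a touch on a 2x2 on-screen button grid to a 4-bit code, one bit
 * per load, so touching a button selects exactly one appliance line.
 * (Grid layout and bit assignment are invented for illustration.) */
static uint8_t touch_to_code(int x, int y, int width, int height) {
    int col = (x < width  / 2) ? 0 : 1;
    int row = (y < height / 2) ? 0 : 1;
    return (uint8_t)(1u << (row * 2 + col));  /* 0001, 0010, 0100, 1000 */
}

int main(void) {
    uint8_t code = touch_to_code(300, 60, 320, 240);
    /* On hardware this nibble would be written to the four data pins
     * of the HT12E encoder; here we just print it. */
    printf("encoder data pins = 0x%X\n", (unsigned)code);
    return 0;
}
```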

Tuesday, 11 October 2016

The 4G Radiation Dangers!? 30 Minutes of Exposure Affects Brain Activity..

The peer-reviewed journal Clinical Neurophysiology has just published research showing that 30 minutes of exposure to LTE cellphone radiation affects brain activity on both sides of the brain.

Researchers exposed the right ear of 18 participants to LTE radio-frequency radiation for 30 minutes. The amount of radiation absorbed in the brain was well within international (ICNIRP) legal limits for cell phones, and the source of the radiation was kept 1 cm from the ear.

To eliminate study biases the researchers employed a double blind, crossover, randomized design, exposing participants to real and sham exposures. 

The resting-state brain activity of each participant was measured by functional magnetic resonance imaging (fMRI) twice, once after exposure to LTE radio-frequency radiation, and again after a sham exposure.

The results demonstrate that radio-frequency radiation from LTE 4G technology affects neural activity both in the brain region closest to the phone and in remote regions, including the left hemisphere of the brain.

LTE Fastest Developing Mobile System Technology Ever

This study is important for two reasons: firstly, because it is the first to examine the short-term effects of Long Term Evolution (LTE), fourth-generation (4G) cell phone technology; and secondly, because of the rapid rate at which this technology is being adopted.

According to the Global mobile Suppliers Association “LTE is the fastest developing mobile system technology ever”. The United States is the largest LTE market in the world. By March 2013 the global total of LTE subscriptions was already 91 million subscribers. Over half of these, 47 million, were American 4G subscribers. 

Cell Phone Exposures and Disease 

This study establishes that short-term exposure to LTE radio frequency radiation affects brain activity. The long-term effects of these exposures have yet to be studied but there is already considerable evidence linking these exposures to a myriad of adverse biological effects including: 
  • Sperm damage
  • DNA breaks
  • Increased glucose in the brain
  • Weakened bones
  • Genetic stress
  • Immune system dysfunction
  • Effects on unborn children
More worrying is the link between these exposures and a long list of diseases such as: 
  • Alzheimer’s disease
  • Autism
  • Brain Tumors
  • Breast cancer
  • Brain cancer
More research is needed on the effects of LTE and other forms of cell phone radiation but the evidence is already compelling. Many scientific and medical experts are sounding the alarm. 

Monday, 10 October 2016

History Of Internet..!

Credit for the initial concept that developed into the Internet is typically given to Leonard Kleinrock. In 1961, he wrote about the ideas underlying ARPANET, the predecessor of the Internet, in a paper entitled “Information Flow in Large Communication Nets.” Kleinrock, along with other innovators such as J.C.R. Licklider, the first director of the Information Processing Technology Office (IPTO), provided the backbone for the ubiquitous stream of emails, media, Facebook postings and tweets that are now shared online every day. Here, then, is a brief history of the Internet:
Partial map of the Internet based on the Jan. 15, 2005, data found on opte.org. Each line is drawn between two nodes, representing two IP addresses; the length of the lines is indicative of the delay between those two nodes. (Credit: Creative Commons/The Opte Project)
The precursor to the Internet was jumpstarted in the early days of computing history, in 1969, with the U.S. Defense Department's Advanced Research Projects Agency Network (ARPANET). ARPA-funded researchers developed many of the protocols used for Internet communication today. This timeline offers a brief history of the Internet's evolution:
1934: Belgian information expert Paul Otlet imagines a “Radiated Library” that would use the technology of the day (the telephone and radio) to create something very much like the Internet.
1965: Two computers at MIT Lincoln Lab communicate with one another using packet-switching technology.
1968: Bolt Beranek and Newman, Inc. (BBN) unveils the final version of the Interface Message Processor (IMP) specifications. BBN wins the ARPANET contract.
A visualization of Internet connections in the United States. The lines represent connections between routers in major urban areas throughout the country. (Credit: NSF)
1969: On Oct. 29, UCLA’s Network Measurement Center, Stanford Research Institute (SRI), University of California-Santa Barbara and University of Utah install nodes. The first message is “LO,” an attempt by student Charles Kline to “LOGIN” to the SRI computer from the university. However, the message could not be completed because the SRI system crashed.
1972: BBN’s Ray Tomlinson introduces network email. The Internetworking Working Group (INWG) forms to address need for establishing standard protocols.
1973: Global networking becomes a reality as University College London (England) and NORSAR (Norway) connect to ARPANET. The term Internet is born.
1974: The first Internet Service Provider (ISP) is born with the introduction of a commercial version of ARPANET, known as Telenet.
1974: Vinton Cerf and Bob Kahn (the duo said by many to be the Fathers of the Internet) publish "A Protocol for Packet Network Intercommunication," which details the design of TCP.
1976: Queen Elizabeth II hits the “send button” on her first email.
1979: USENET forms to host news and discussion groups.
1981: The National Science Foundation (NSF) provides a grant to establish the Computer Science Network (CSNET) to provide networking services to university computer scientists.
1982: The Transmission Control Protocol (TCP) and Internet Protocol (IP), together the protocol suite commonly known as TCP/IP, emerge as the standard for ARPANET. This results in the fledgling definition of the Internet as connected TCP/IP internets. TCP/IP remains the standard protocol for the Internet.
1983: The Domain Name System (DNS) establishes the familiar .edu, .gov, .com, .mil, .org, .net, and .int system for naming websites. This is easier to remember than the previous numeric designation for websites, such as 123.456.789.10.
1984: William Gibson, author of "Neuromancer," is the first to use the term "cyberspace."
1985: Symbolics.com, the website for Symbolics Computer Corp. in Massachusetts, becomes the first registered domain.
1986: The National Science Foundation’s NSFNET goes online, connecting supercomputer centers at 56,000 bits per second (the speed of a typical dial-up modem of the time). Over time the network speeds up, and regional research and education networks, supported in part by NSF, are connected to the NSFNET backbone, effectively expanding the Internet throughout the United States. The NSFNET was essentially a network of networks that connected academic users along with the ARPANET.
1987: The number of hosts on the Internet exceeds 20,000. Cisco ships its first router.
1989: World.std.com becomes the first commercial provider of dial-up access to the Internet.
1990: Tim Berners-Lee, a scientist at CERN, the European Organization for Nuclear Research, develops HyperText Markup Language (HTML). This technology continues to have a large impact on how we navigate and view the Internet today.
1991: CERN introduces the World Wide Web to the public.
1992: The first audio and video are distributed over the Internet. The phrase “surfing the Internet” is popularized.
1993: The number of websites reaches 600 and the White House and United Nations go online. Marc Andreessen develops the Mosaic Web browser at the University of Illinois, Champaign-Urbana. The number of computers connected to NSFNET grows from 2,000 in 1985 to more than 2 million in 1993. The National Science Foundation leads an effort to outline a new Internet architecture that would support the burgeoning commercial use of the network.
1994: Netscape Communications is born. Microsoft creates a Web browser for Windows 95.
1995: Compuserve, America Online and Prodigy begin to provide Internet access. Amazon.com, Craigslist and eBay go live. The original NSFNET backbone is decommissioned as the Internet’s transformation to a commercial enterprise is largely completed.
1996: The browser war, primarily between the two major players Microsoft and Netscape, heats up. CNET buys tv.com for $15,000.
1997: PC makers can remove or hide Microsoft’s Internet software on new versions of Windows 95, thanks to a settlement with the Justice Department. Netscape announces that its browser will be free.
1998: The Google search engine is born, changing the way users engage with the Internet.
1999: AOL buys Netscape. Peer-to-peer file sharing becomes a reality as Napster arrives on the Internet, much to the displeasure of the music industry.
2000: The dot-com bubble bursts. Web sites such as Yahoo! and eBay are hit by a large-scale denial of service attack, highlighting the vulnerability of the Internet. AOL merges with Time Warner.
A newly expanded global Internet, focused solely on science and education, now includes half of the world's countries; the high-speed fiber-optic network connects users at speeds of 10 Gbps. (Credit: GLORIAD)
2001: A federal judge shuts down Napster, ruling that it must find a way to stop users from sharing copyrighted material before it can go back online.
2003: The SQL Slammer worm spreads worldwide in just 10 minutes. Myspace, Skype and the Safari Web browser debut.
2004: Facebook goes online and the era of social networking begins. Mozilla unveils the Mozilla Firefox browser.
2005: YouTube.com launches.
2006: AOL changes its business model, offering most services for free and relying on advertising to generate revenue. The Internet Governance Forum meets for the first time.
2009: The Internet marks its 40th anniversary.
2010: Facebook reaches 400 million active users.
2011: Twitter and Facebook play a large role in the Middle East revolts.