Fighting Fit: Implantable Electronics Set To Improve Quality of Life by Countering Chronic Conditions
Comment
There is no denying ARM's success over the years; from its legendary early days in a converted turkey shed to today, where its IP is found in billions of devices. And yet questions are being asked, if only quietly, about where the company goes from here.
In essence, the ARM business model of developing IP, rather than hardware, makes sense. But, as ARM freely admits, its IP holds 95% of some available markets. To produce the growth which investors want to see means new markets need to be targeted. If you take a look at ARM's latest annual report, it gives some hints about which markets those might be. Digital TVs is one, where it claims a 35% share; microcontrollers is another, with ARM saying its cores appear in only 10% of devices shipped.
It's not just about shipping more cores, however. The company also needs to develop different kinds of IP so it can supply more of the elements that comprise an SoC. It's making moves here, with the development of media processors.
There is one area to which the ARM architecture is eminently suited: servers. The ARM architecture has three basic planks: processor performance, power consumption and silicon area. Data centres need all of these attributes yet, until the end of October, the company didn't really have an offering. The reason? Data centre applications are almost entirely 64bit; the ARM architecture was 32bit. But version 8 of the ARM architecture addresses that.
Data centre servers are powered almost entirely by Intel processors. If ARM's licensees can produce chips which offer significant power and performance benefits, not to mention price, the doors will open on a new market.
But servers alone won't bring the growth which ARM desires; for this, it must still rely on its licensees developing devices which meet the needs of customers in a range of other markets.
Graham Pitcher, Group Editor (gpitcher@findlay.co.uk)
Group Editor: Graham Pitcher
Web Editor: Chris Shaw
Deputy Web Editor: Laura Hopperton
Contributing Editors:
David Boothroyd, Chris Edwards,
Louise Joselyn, Roy Rubenstein
Art Editor: Martin Cherry
Illustrator: Phil Holmes
Key Account Director: Tricia Bodsworth
Classified Sales: James Slade
Circulation Manager: Chris Jones
(circulation@findlay.co.uk)
Production Controller: Nicki McKenna
Publisher: Peter Ring
Executive Director: Ed Tranter
Represented in Japan by:
Shinano International: Kazuhiko Tanaka,
Akasaka Kyowa Bldg, 1-6-14 Akasaka,
Minato-Ku, Tokyo 107-0052
Tel: +81(0)3 3584 6420
New Electronics: Tel: 01322 221144
Fax: 01322 221188
www.newelectronics.co.uk
email: ne@findlay.co.uk
ISSN 0047-9624
New Electronics, incorporating
Electronic Equipment News and Electronics
News, is published twice monthly by
Findlay Media Ltd,
Hawley Mill, Hawley Road,
Dartford, Kent, DA2 7TJ
Copyright 2011 Findlay Media.
Annual subscription (22 issues)
for readers in the UK is £106,
overseas is £161, and airmail is £197.
Origination by
CTT, Walthamstow, London E17 6BU
Printed in England by
Wyndeham Heron Ltd, Heybridge, CM9 4NW
Moving on? If you change jobs or your
company moves, please contact
circulation@findlay.co.uk to continue
receiving your free copy of New Electronics
Building for the future: ARM extends its architecture to embrace 64bit processing
News Engineering Design Show 2012

Better by Design
In October 2012, New Electronics' parent company Findlay Media will launch The Engineering Design Show. Eureka's editor Paul Fanning previews this exciting new event.
The UK employs 106,722 design engineers in the
electronics, mechanical and electromechanical
disciplines at 12,981 sites across the full range of
industry sectors. And yet there is currently no
standalone exhibition or industry event designed
specifically to meet their needs for information,
advice and above all innovation.
At a time when it has never been more important
for design engineers to have access to the latest
information, advice and technological developments,
this seems particularly strange. The increasing need
for engineers to operate across a variety of
technological sectors and to incorporate ever more
advanced technology into their designs makes the
importance of getting to grips with the latest
products and techniques crucial.
Meanwhile, constraints on the time available to
engineers have made it increasingly difficult for
them to attend exhibitions. And, if those
exhibitions focus purely on one
technology or market sector, justifying
that time can be even more difficult. As
time becomes more precious, the need
for a show that runs the technological
gamut under one roof would appear to
have become more pressing.
This was borne out by research undertaken by New Electronics and Eureka magazines in July 2011. A significant sample of the two magazines' audiences was surveyed to identify the potential for an engineering design-focused event. Of 600 interviews carried out, 68% of New Electronics readers and 75% of Eureka readers said they would be interested in attending a design engineering focused exhibition and conference.
And so New Electronics' parent company Findlay Media is filling this longstanding gap, announcing the launch of The Engineering Design Show, which will take place on 11 and 12 October 2012 in the Jaguar Exhibition Hall, Ricoh Arena, Coventry.
Findlay Media's Executive Director Ed Tranter said: "The research simply confirmed for us that The Engineering Design Show will offer something unique and valued in the market: an exhibition catering specifically for design engineers, regardless of the industry in which they work."
The New Electronics and Eureka reader surveys
indicated that content is all important. The
Engineering Design Show will feature two
workshop theatres and a conference area covering
up to 36 sessions across the two days, with
content guided by the reader survey results and
further research being undertaken.
Each workshop theatre will offer visitors practical
and hands-on content in the form of case study
presentations by leading technology experts. This
area will be free for visitors to attend and each
workshop session is to be promoted individually
with a full synopsis of what visitors can expect to
learn. The conference theatre will carry a small
delegate fee and tickets will need to be purchased in
advance. The conference area and seminar theatres
are deliberately located within the exhibition hall to
maximise the benefit to exhibitors.
The conference will feature topics of direct relevance to electronic design engineers and be led by experts in their technological fields. The topics chosen will reflect information provided in New Electronics' 2011 survey. More details of the conference and workshop sessions will be forthcoming over the coming months.
Tranter added: "A good conference programme is one of the keys to a successful exhibition. We realise that we need to offer visitors more than just an exhibition; we have to give them technical content that is tailored specifically to their professional needs as design engineers."
Ultimately, the Engineering Design
Show will stand or fall by the quality of its
exhibitors and, of course, its visitors. By
offering a showcase for leading names in
the field, The Engineering Design Show
promises to deliver an event that design
engineers can truly call their own.
To register your interest as an attendee or an exhibitor, visit:
www.engineeringdesignshow.co.uk
or call Luke Webster on 01322 221144
Jamie Urquhart
Following a BSc in Physics and Physical Electronics from Bath University, Urquhart joined Plessey Research, where he worked on analogue and digital products based on high-speed CMOS, bipolar and GaAs.
He joined Acorn Computers in 1984, where he managed the VLSI design group, before co-founding ARM in 1990. At ARM, he held a number of roles, including chief operating officer.
Leaving ARM in 2002, he became a venture partner with Pond Ventures and held board level positions with a number of companies. He is currently a non-executive director of picoChip.
Interview Jamie Urquhart

Change for the good
Jamie Urquhart tells Graham Pitcher the time is right for the UK's electronic systems sector to embrace change.
Like many in the electronics industry, Jamie Urquhart caught the electronics bug at an early age. In his case, it was using a Philips electronics kit at age 8 to build a radio.
Following a physics degree at Bath, Urquhart worked at Plessey's Caswell research centre. "It was a fantastic experience," he said. "It's great to go to work where you love to go to work." Having designed a chip and taken it into production, he joined Acorn Computers as a chip designer. "It had just moved into the silver building and there was a fantastic buzz. We knew if we didn't get chips right, it would kill the company." But he also found that Acorn was more focused on computers than chips and the design department drifted apart. "We decided to do something," he recalled. "Three of us put a business plan together and VLSI Technology introduced us to Apple as a potential investor. Within six months, I was general manager of ARM."
That was the start of a career which has taken him to his current role as a venture partner with Pond Ventures, as well as an entrepreneur in residence at the Judge Business School and board member of a number of companies, including picoChip.
His latest challenge is to produce a report on the future of the UK's electronic systems community; something about which he was sounded out by Derek Boyd and Ian Phillips of NMI. What does Urquhart think about the sector? "The electronic system community is hugely pervasive," he claimed, "and will become even more so." But developers need to improve the quality of life, he pointed out. "Think about the Apple approach. What it has done is to develop products which do just that and this aspect will be very important in the future, particularly for healthcare."
Urquhart sees the electronic systems sector as only getting bigger and wants to ensure the UK gets its fair share of the business. "Some might say the UK has had an interesting ride over the years," he suggested. "For example, we've seen Plessey grow and fade and similar stories. Nevertheless, a lot of UK developed technology finds its way into pervasive products."
And it is because of the sector's importance that Urquhart believes a strategic view must be taken of what it represents. "It's important for its capabilities and for the value which it generates," he continued. "But it is also important for the way in which it will be able to improve people's lives."
However, Urquhart doesn't want to produce a report which is just taken to Government. "I'm interested in addressing challenges," he asserted, "and to tap into the need to solve problems."
One impression he has of the sector is a lack of ambition. "That's my view," he clarified, "and it's not uniform across the sector, but I meet start ups who are only looking for a medium level of success, if that. Too often, companies entertain ideas, rather than challenge them."
Another big issue is marketing. "The UK has produced, and is producing, good engineers, but we don't appear to have sufficient product marketing expertise to help in the product definition and value capture aspects," he reflected. "But this report will not be about what I think; it will be about identifying the problems, identifying the opportunities and how the sector can be driven forward."
He is also aware of the need to involve recent graduates in the process. "We want to have more inclusion and to take input from a range of sources. We need to get people to get involved."
A further area of investigation will be the relationship between R&D and exploitation. "Think about how much money goes into R&D," he suggested. "Then ask yourself whether we exploit this research effectively. Is there the right marketing expertise to bring it to market?"
Overall, the report is likely to focus on the things that can be done better. "We can always do things better," Urquhart believes, and part of that is the need for a longer term vision. "While the industry really needs to think about the next couple of decades, management is usually focused only on the next quarter."
He also sees the need for engineering careers to get a makeover. "Often, engineering doesn't give the feeling that it is creative." He used ARM as an example. "ARM has a creative process. It brings out new ideas; not all of them work, but it exploits the ones that do. That process is important and, I believe, is one which will attract people into the industry."
Having said that, he admitted that it's a huge challenge for kids to decide what they want to do. And he wants industry to get more involved with shaping the education system. "If companies aren't happy with standards, then stop moaning and get involved."
In the end, Urquhart said chairing the report attracted him because it was strategic. "The electronic systems sector needs to be seen by Government as strategic, but this won't happen if the industry carries on with its business as usual approach. Change is possible," he concluded, "and the time is ripe for a new level of engagement."
If you would like to contribute to the report, contact ESReport@nmi.org.uk
Photo: Charles Milligan
Fighting fit

Leading edge semiconductor technology is enabling the development of smaller and more powerful medical devices for use inside the human body. By Laura Hopperton.

Despite the challenges of designing for environments that are typically
inhospitable to conventional electronics, research teams from around
the world are developing novel methods for the continuous, real time
monitoring and sensing of a range of chronic diseases.
These advances, according to Dr Timothy Constandinou, deputy director of
the Centre for Bio-Inspired Technology at Imperial College London, are
providing patients with a more portable, precise and personal way of
managing their illness than ever before.
"We've come a long way since the days of the humble pacemaker," he noted. "Advances in biomedicine and information and communications technologies have enabled the healthcare industry to move towards a smarter, more decentralised approach centred not on the physician, but the patient.
"Our research involves a strong combination of integrated miniature sensing with intelligent processing, leveraging state of the art semiconductor technology. We aim to make electronics work with biological processes, while still remaining small and consuming tiny amounts of energy."
Perhaps most significant of all about Imperial's research is that it relies heavily on a biologically inspired approach. Dr Constandinou explains: "This means that, rather than take a problem and engineer a mathematical solution, we say to ourselves 'how does the body do it?' and then model some electronics around that."
Below: Imperial's neural interface converts analogue signals recorded from microelectrodes implanted in the brain into a stream of digital spike events
Right: The biocompatible IntelliTuM relies on a self calibrating sensor to measure oxygen levels in the blood; a key indicator of tumour growth

Dr Constandinou and his fellow biomedical engineers recently completed work on an artificial pancreas, which they believe has the potential to close the loop on Type 1 diabetes.
The bionic system comprises an electrochemical sensor that monitors
blood sugar levels continuously; a chip that mimics the unique electrical
characteristics of alpha and beta cells in the human pancreas; and two
small pumps worn on the body.
"In a patient with Type 1 diabetes, the body's immune system attacks and kills the insulin-secreting beta cells, causing an increase in blood glucose," explained Dr Constandinou. "Over time, the glucagon secreting alpha cells also tend to fail, so people with Type 1 diabetes become prone to episodes of extremely low blood sugar.
"As such, we designed the chip's control algorithms to mimic the very different behaviours of the two cell populations. An alpha cell tends to react to rapid electrical events (spikes), while the beta cell tends to react in bursts of voltage spikes, punctuated by low voltage silent periods that last for seconds or even minutes. When glucose concentrations rise, the beta cells remain in the high voltage burst state longer, secreting more insulin as a result."
Imperial's bionic pancreas mimics this biological process by detecting the user's glucose level via a sensor every five minutes. If it reports a high level of glucose, the silicon beta cell (pictured top right) generates a signal that drives a motor.
This motor pushes a syringe, dispensing insulin into the tissue beneath the skin until the glucose reading at the sensor drops. If the sensor reports a low glucose value, the silicon alpha cell activates the second pump to administer glucagon instead.
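In outline, the scheme reduces to a simple sense-decide-dispense loop. The sketch below is a minimal illustration of that idea; the thresholds, doses and simulated sensor are invented for clarity and bear no relation to the Imperial team's actual algorithms or hardware.

```python
# A minimal sketch of the two-cell control idea described above. All values
# here are illustrative assumptions, not the Imperial team's parameters.
import random

HIGH_GLUCOSE = 10.0   # mmol/l: above this, the 'silicon beta cell' responds
LOW_GLUCOSE = 4.0     # mmol/l: below this, the 'silicon alpha cell' responds

def read_sensor():
    # Stand-in for the electrochemical glucose sensor, which the real
    # system reads every five minutes; here we simply simulate a value.
    return random.uniform(3.0, 14.0)

def drive_pump(hormone, dose):
    # Stand-in for the motor-driven syringe pumps worn on the body.
    print(f"dispensing {dose} unit(s) of {hormone}")

for _ in range(6):   # six readings: half an hour of simulated operation
    glucose = read_sensor()
    if glucose > HIGH_GLUCOSE:
        # Beta-cell behaviour: stay in the 'bursting' state while glucose
        # remains high, secreting insulin.
        drive_pump("insulin", dose=1)
    elif glucose < LOW_GLUCOSE:
        # Alpha-cell behaviour: respond to low glucose with glucagon.
        drive_pump("glucagon", dose=1)
```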
"This biomimetic approach diverges from today's dominant method of delivering only insulin using a relatively simple control system," commented Dr Pantelis Georgiou, who led the project. "The great thing about our system is that it lets people with diabetes do away with multiple insulin injections and administer the insulin in a more biologically faithful way. This reduces any secondary complications and means patients no longer have to worry about what they eat and drink."
Transmitting raw data through the skin barrier
In 2009, the Imperial team embarked on a project to develop a brain-machine interface for patients with spinal cord injury and neurological disorders. The ultra low power cortical implant, still under development as part of a collaborative effort between Imperial and the Universities of Newcastle and Leicester, is designed to interface between the central nervous system and low power, custom built digital microelectronics.
The system works by converting analogue signals recorded from microelectrodes implanted in the brain into a stream of digital spike events. The significance of this project over other research efforts, claims Dr Constandinou, is that it overcomes the bottleneck of transmitting raw data through the skin barrier.
"Early implementations of brain-machine interfaces connected intracortical electrodes to external amplifiers via wires passing through the skin," he explained. "This breaches the body's natural barrier to bacterial infections, compromising the implant and presenting a serious danger to patients.
"Several groups have begun developing wireless neural links, but these generally transmit all the raw data that is recorded. The problem with that is it requires a relatively high data rate. For example, for a 2Mbit/s wireless link, if the data is sampled at 20ksample/s, requiring 10bit per sample, you can only look at 10 channels. That's nothing when you think that the brain has about 100billion neurons."
By putting what Dr Constandinou describes as 'local intelligence' on the array, the Imperial researchers were able to overcome this issue and sort the data before it was transmitted out of the body.
"This resulted in a huge data reduction, as only the timestamp and neuron identifier needed to be transmitted, instead of the entire recorded waveform," Dr Constandinou explained. "For the first time, we were able to measure thousands of neurons, not just tens."
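The arithmetic is easy to verify. The short calculation below reproduces the quoted raw-data figure and shows the scale of the saving; the spike-event sizes and mean firing rate are assumptions made purely for illustration.

```python
# Checking the data-rate arithmetic above. The link, sample rate and bit
# depth come from the article; the event sizes and firing rate are assumed.
LINK = 2_000_000          # 2Mbit/s wireless link
SAMPLE_RATE = 20_000      # 20ksample/s per channel
BITS_PER_SAMPLE = 10

raw_per_channel = SAMPLE_RATE * BITS_PER_SAMPLE     # 200,000 bit/s
print("raw channels:", LINK // raw_per_channel)     # -> 10

# With 'local intelligence' on the array, only a timestamp and a neuron
# identifier leave the body for each detected spike.
EVENT_BITS = 32 + 16      # assumed timestamp + neuron ID sizes
FIRING_RATE = 10          # assumed mean spikes per neuron per second
per_neuron = EVENT_BITS * FIRING_RATE               # 480 bit/s per neuron
print("sorted-spike neurons:", LINK // per_neuron)  # -> roughly 4,000
```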
Ultimately, this means multiple degrees of movement can be manipulated in the human body, opening up the possibility of helping people with neurological damage, amputees with prosthetic limbs and even the totally paralysed. While the technology is expected to take another five years to develop, Dr Constandinou believes it will form a key component of next generation brain-machine interfaces.

Illustration: Oliver Burston
When designing devices for inside the human body, Dr Constandinou notes the importance of remaining miniature, whilst also consuming extremely small amounts of power. It is also imperative, he says, to ensure good biocompatibility and stability, to avoid patients having to undergo further invasive surgery.
"Apart from the risks of infection and other surgical complications, electronic implants must withstand fluid leaks, mechanical stress and motion artefacts, while operating reliably on a low power battery supply," he said. "Moreover, it's a real challenge for designers to put everything together in a small enough package that works reliably."
Unblocking a bottleneck
This bottleneck was a major stumbling block for researchers in Germany developing a device that can monitor tumour growth in cancer patients. A team from the Technical University of Munich recently unveiled a biocompatible device, dubbed IntelliTuM (Intelligent Implant for Tumour Monitoring), that relies on a self calibrating sensor to measure oxygen levels in the blood; a key indicator of growth.
According to Professor Bernhard Wolf, who led the research, the growth rate data measured by the sensor can be transmitted wirelessly to an external receiver carried by the patient and transferred to their doctor for remote monitoring and analysis.
"We developed the device to monitor and treat slow growing tumours that are difficult to operate on, such as brain tumours and liver tumours, and for tumours in elderly patients for whom surgery might be dangerous," explained Prof Wolf.
"The main challenge for us was developing a sensor that functions entirely autonomously for long periods of time. The device had to continue to function and deliver correct values even in the presence of protein contamination or cell debris. It also had to be invisible to the body so that it was not identified as a foreign object, attacked and encapsulated in tissue."
Prof Wolf and his team are now planning to incorporate a miniature medication pump into the device to deliver chemotherapy directly into the tumour environment. However, even though the device only measures 2cm, it still needs to be further miniaturised before it can be deployed.
Prof Wolf believes the ultimate solution to this is for the device to employ energy harvesting; a technology which Dr Constandinou and his team are also exploring at Imperial.
"At present, our researchers are developing devices that can be powered simply by a person walking or moving their head," Dr Constandinou said. "Although the technology is currently a way off, and a device measuring about 1cm³ can only give you a few microwatts, we believe it could be the main power source for devices in just a few years."
Meanwhile, Dr Constandinou and his fellow engineers are working on several other projects, which he says could, in the next 10 years, help people with Parkinson's disease, multiple sclerosis, depression and even obesity. "The possibilities," he concluded, "are endless."
www3.imperial.ac.uk/bioinspiredtechnology
Smart skin
An ultra thin electronic tattoo
that self adheres to human skin
to track muscle activity, heart
rate and other vital signs was
unveiled recently by researchers
at the University of Illinois.
The electronic patch, which bends, wrinkles and stretches with the mechanical properties of skin, has been demonstrated through an array of electronic components mounted on a thin, rubbery substrate, including sensors, LEDs, transistors, radio frequency capacitors, wireless antennas and solar cells.
As well as offering advances in biomedical applications and wearable
electronics, the researchers believe the technology could one day help
patients with muscular or neurological disorders to communicate.
The team has already used the electronic patch to control a video
game, demonstrating the potential for human computer interfacing.
They now plan to add Wi-Fi capability to the device.
www.illinois.edu
Going soft
A new memory device that is soft, pliable and functions extremely well in
wet environments could open the door to a new generation of
biocompatible electronic devices.
According to Dr Michael Dickey, an assistant professor of chemical and biomolecular engineering at North Carolina State University, the memristor-like device has the physical properties of gelatin and works in environments hostile to traditional electronics.
The prototype module has two states: an 'on' state in which it is conductive, and an 'off' state in which it is resistive. These states can be controlled by the thickness of an oxide skin that forms on the liquid metal.
"The device's ability to function in wet environments, and the biocompatibility of the gels, mean that this technology holds promise for interfacing electronics with biological systems such as cells, proteins, enzymes and even tissue," noted Dr Dickey. "These properties may allow it to be used for biological sensors or for medical monitoring."
www.ncsu.edu
Company Announcement Altera
User-customisable ARM-based
SoC devices
FPGA with on-chip ARM core heralds new age of low-cost, fast-turn, single-chip embedded
systems. Author: Todd Koelling, senior manager, embedded products, at Altera.
In today's highly competitive market, embedded systems designers need to re-examine their design and development process. Designers not only need to create much more complex systems, but also need to be able to very rapidly turn new or derivative designs. One market development that is actually advantageous to designers is the emergence of the ARM processor as the dominant platform for embedded systems. As a result, a growing number of ARM-based solutions have emerged to address this market, but with various tradeoffs.
Multichip solutions are relatively easy to
implement, but are costly and often lack the
flexibility and performance/power designers
need.
Single-chip solutions employing soft processor
cores are also relatively easy to implement, but
are limited in their performance.
ASIC SoCs with hardened ARM cores offer
excellent power, performance, and
optimisations, but are slow to market (due to
development times), inflexible, and too costly for
the vast majority of applications.
A hardened ARM core on an FPGA-based implementation, with its low cost and fast time-to-market, offers an intuitively attractive alternative to multi-chip and ASIC SoC options (Figure 1). For example, the ARM-based SoC FPGA from Altera tightly couples a highly optimised hard processor system (HPS) with an on-chip FPGA. The HPS, which includes the dual-core ARM processor, multiport memory controller and multiple peripheral elements, offers up to 4,000 DMIPS (Dhrystone 2.1 benchmark) of processing performance for under 1.8W. These hard IP blocks offer high performance while lowering power and cost, and freeing up logic resources for product differentiation. On-chip FPGA fabric can be customised by the designer to create application-specific logic.
Application example: Next-Generation Drives
In a conventional drive design, multiple devices would be required to provide a full system solution. A digital signal processor (DSP) would be used to perform central control functions, a networking ASIC for the networking protocol(s), and an FPGA for additional functionality, including safety. In a SoC FPGA implementation, all these elements are integrated onto a single chip.
With a single chip, the gains in performance and power are significant. The SoC FPGA offers a 20-fold control loop rate improvement over the multi-chip solution, from 100µs to 5µs. This translates into significantly improved power efficiency, which can account for 90% of the total cost of operation of the drive.
The power consumed by the SoC is estimated to be 37% lower than for its three-chip counterpart; more importantly, however, the efficiency of the drive will have a more significant effect on reducing overall power.
By combining three or more devices onto one chip, the bill of materials for the system can be reduced; board space is also reduced by 57%. In addition, more functionality can be realised at a lower cost. The SoC can easily support two motors, whereas the multi-chip option supports only one. Combining support for two motors on one chip delivers 53% lower cost than duplicating the multi-chip device configuration for each motor.
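Those figures are straightforward to sanity check. The short calculation below uses the percentages quoted in the text; the control loop periods are reconstructed here as microseconds, an assumption consistent with the quoted 20-fold improvement.

```python
# A quick, purely illustrative check of the drive-example figures above.
multi_chip_loop = 100e-6    # assumed 100 microsecond control loop period
soc_fpga_loop = 5e-6        # assumed 5 microsecond control loop period
print("control loop improvement: %.0fx" % (multi_chip_loop / soc_fpga_loop))

# Savings quoted in the text for the single-chip SoC FPGA implementation.
savings = {"SoC power vs three chips": 0.37,
           "board space": 0.57,
           "two-motor cost": 0.53}
for item, saving in savings.items():
    print(f"{item}: {saving:.0%} lower")
```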
Design teams deploying FPGA SoC technology
can capitalise on its significant productivity and
competitive advantages. Hard IP elements
deliver the best performance, lowest power and
highest density possible, while on-chip FPGA
fabric offers the ability to rapidly differentiate,
augment and customise functionality, during
design or in the field.
The field-programmable platform, combined
with highly automated and well-supported
design and software development tools, enables
the design team to develop a custom SoC, using
off-the-shelf devices, in a fraction of the time of
ASIC or multi-chip devices. The resulting design
is flexible, scalable, and re-usable, making it
possible for the team to rapidly adapt and
respond to new markets, changing standards
and faster process nodes, as well as maintain
products with long field life.
Figure 1: Hard Processor System (HPS) SoC
Sector Focus Research UK

Leading the field
Attention is focusing on how to commercialise graphene research underway around the UK. By Chris Shaw.

The UK is setting the pace in the
race to develop and commercialise graphene, and this lead looks set to continue, with Chancellor George Osborne pledging £50million for a Global Research and Technology Hub. Indeed, Universities and Science Minister David Willetts believes the 'wonder material' has the potential to drive UK economic growth.
Graphene was discovered at the University of Manchester by Professors Andre Geim and Konstantin Novoselov. As a result, the two were awarded the 2010 Nobel Prize in Physics. The one atom thick material is considered by many to be a natural successor to silicon and, since Profs Geim and Novoselov's discovery, the University of Manchester has intensified its research.
Dr Leonid Ponomarenko, from the University of Manchester's School of Physics and Astronomy, specialises in the electronic properties of graphene and is working on a new technique to control the material in a way previously considered impossible.
By sandwiching two sheets of graphene with boron nitride, another two dimensional material, his team has developed a four layered structure. Because the two graphene layers are completely surrounded by boron nitride, it has been possible for the first time to observe how graphene behaves when unaffected by the environment and how it reacts when encapsulated by another material. As a result, new phenomena, such as the metal-insulator transition, can be observed.
"We observed that graphene's properties don't change over time," said Ponomarenko. "And that's the beauty of it. But work still needs to be done to see if other materials can do the same job. Graphene is sensitive to its environment and protecting it from both sides is important if we want to keep its properties under control."
The team worked with boron nitride due to its similarity to graphite and the fact it can be peeled with sticky tape down to a single atomic layer. Dr Ponomarenko noted: "Boron nitride is a very good insulator, while graphene is an extremely good conductor of electricity. The surface of boron nitride is atomically flat, which, in combination with its insulating properties, makes it a perfect material as a substrate for graphene. In my opinion, if graphene electronics become a reality, there is no better substrate material than boron nitride on which to fabricate a graphene chip."
Nanoribbons have great potential
Nanoribbon research is the focus of Dr Andrei Khlobystov from the University of Nottingham's School of Chemistry, who believes nanoribbons have great potential in the production of nanomaterials for use in next generation computers and data storage devices. "One of the real problems with graphene is that it has no electronic band gap; if you want to make a transistor, you should be able to turn it on and off," he said. "Although you can manipulate graphene to some extent, it is always on, so it's not a good transistor. Once you start cutting a 2d sheet of graphene into ribbons, it's possible to develop a real electronic band gap, so nanoribbons provide realistic opportunities to introduce graphene into electronics."

Dr Andrei Khlobystov is researching graphene nanoribbons at the University of Nottingham

Currently, nanoribbon preparation involves taking a piece of graphene and cutting it with an electron beam or chemical etching. However, this means it's only possible to make one nanoribbon at a time and results in poorly defined edges. "We needed a new technique to enable mass production and create atomically smooth edges," said Dr Khlobystov. "Our new method addresses both issues."
The team put molecules containing carbon and other elements in a nanotube and used this as a template to limit the growth structure in two dimensions, while allowing growth in one dimension. "You can't form a 2d sheet of graphene," noted Dr Khlobystov. "If you simply put carbon atoms in a nanotube, provide lots of energy and let them rearrange into a ribbon, you only get another nanotube within the first nanotube. So another element must be added for the carbon to attach itself to."
The team tried a number of elements, but the only one to work was sulphur. Sulphur attaches itself along the edge of the ribbon, stabilises it and allows nanoribbon formation, instead of the tube within the tube which would normally happen. Once the molecules containing carbon and sulphur break down into individual atoms and reassemble into the ribbons, the nanoribbons have atomically smooth edges.
Structural diagrams showing a sulphur-terminated graphene nanoribbon encapsulated within a carbon nanotube

A representation of graphene growing on alloy catalysts using chemical vapour deposition. Credit: Robert Weatherup
Catalysing the process
Robert Weatherup, part of the Hofmann research group within the University of Cambridge's Department of Engineering, is looking to grow larger areas of graphene using chemical vapour deposition. With this, a catalyst film is exposed to a carbon-containing gas at elevated temperatures and graphene assembles on the catalyst surface.
"We use alloys as the catalyst film," said Weatherup. "This means we can tune the graphene growth by tuning the catalyst alloy and thus achieve high quality monolayer graphene growth at low temperatures."
By adding a tiny amount of gold to the surface of a nickel film, graphene could be grown at 450°C, rather than the 1000°C normally required. At the higher temperature, many of the materials used in electronics manufacturing can be damaged, so graphene can't be integrated directly. By using gold, the number of places where graphene grows on the film is reduced, because the alloy blocks its growth. As each graphene flake emerges, it grows larger and for longer before it joins with another flake. The conductivity of the graphene is improved because electrons travelling through it are not disturbed as often by joins between flakes.
"Obtaining growth at this temperature is a big step forward," Weatherup observed. "The main benefit of reducing the growth temperature is that we can then grow graphene directly on to materials such as plastics, which are damaged by higher growth temperatures. This could open the gateway to flexible electronics."

Nothing less than revolutionary
So, once commercialised, how do the researchers believe graphene will change the electronics industry? Dr Ponomarenko describes the material as nothing less than revolutionary. "The size of an individual transistor can be drastically reduced, probably down to 10 atoms across if it's made of graphene," he said. "This will revolutionise electronics, making it more powerful and much faster."
According to Dr Khlobystov, the biggest hurdle is the transition from cutting edge research into mass production. "It's a job for engineers to think about," he noted. "Carbon nanostructures have a whole spectrum of exploitable properties. Carbon is cheap, abundant and, when we get the technology right, it could replace almost all metal in electronics; critical, as we are running out of rare metals."
Weatherup believes graphene's first large scale commercial use will be as a transparent conductor for flexible touch screens, replacing the expensive and brittle indium tin oxide. "I then expect it will see use for individual high performance electronic components, such as high frequency transistors used in microwave electronics, which could be integrated with existing silicon electronics," he added.
Given the proven track record of silicon electronics, it seems unlikely graphene will replace it; more likely, graphene's amazing properties will be integrated with existing silicon based devices to add new functionality and performance.
www.totallyengineering.com/ne
Part of the totally engineering network
Sales: Darren Wright 01621 813393
For more information on the following jobs enter the reference No. on... www.totallyengineering.com/ne
Power Supply Design
Engineer - HV PSUs
Berkshire
Type: Permanent Salary/Rate: To £40k per annum + good bonus
We are looking to recruit an engineer with around 5 years' (or more) experience of high voltage power supply design, preferably to at least 30kV.
In addition, any experience of high voltage
transformer design/optimisation and HV PCB
layout/construction would be useful (either to
do own layout or advise a PCB layout
engineer).
And the icing on the cake... hopefully, an
understanding of the safety agency
requirements (preferably UL 61010).
For full details online
enter reference:
JS7286AL
PCB Inspector
Location: Huddersfield, West Yorkshire
Type: Contract
Salary/Rate: £6.15 per hour - 37.5 hours per week
Job Details: This company, based in
Huddersfield, is looking for a full-time
electronics/PCB inspector. The successful
candidate must have previous experience in
working with PCB components and testing PCBs.
Also required are:
Good general assembly skills
Wiring/soldering experience.
The client offers:
Working for a stable and growing company
Excellent working environment
Chance of a permanent role for the right
candidate.
The client is looking to start interviewing ASAP.
For full details online
enter reference: JS-.WYJSPCBA11
Electrical
Design/PLC Software
Engineer - SCADA
Location: Wallsend, Tyne & Wear
Type: Permanent
Salary: £35k-£38k per annum
(DOE), plus private healthcare
Job Details: If you're a talented
electrical design engineer, with
an excellent working knowledge
of PLC and SCADA systems, this is
a first-class opportunity to
work on leading-edge
engineering projects.
You will participate in the design
and development of software
for our client's existing and
future range of products. You
will assist in the continuous
development of our client's
control systems to maintain
market lead. Involved in all
aspects of projects, from
specification through system
design, build, test, commissioning
and customer acceptance, you
will also provide field support.
To be considered for this
position, you must have: Machine
control systems and automation
experience; Solid experience of
PLC and SCADA programming; At a
minimum, 2nd Class Degree in a
relevant engineering subject.
For full details online
enter reference:
JS/301036-SOIED2211/DS
Power and Analogue
Electronics Engineer
Location: Loughborough Type: Permanent
Unique opportunity to work in a thriving business
in the field of high voltage generators and power
management systems. You will provide analogue
electronic and power expertise to support and
maintain existing systems, and also R&D to
develop new systems in the Power Management
and Control Cubicle side of the business.
Essential Skills:
Degree in Electronic and Electrical Engineering or equivalent
Analogue electronics circuit design and development experience
Power Electronics design and development experience.
Desirable Skills:
C++/VB programming desirable, but not essential
Knowledge of power generation desirable, but not essential.
We....
Understand your specific requirements and
career aspirations
Guarantee to keep you updated
Provide full pre-interview preparation and post-
interview de-briefing
Guarantee not to approach any company without
consulting you first!
For full details online
enter reference: JSFP1415
Wireless Graduates/Junior
Engineers
Location: Cambridge Type: Permanent
Salary/Rate: £25k-£30k per annum + benefits
Currently 20 staff and growing rapidly, it's an
exciting time to join this fledgling start-up,
developing a new wave of wireless machine to
machine (M2M) network devices. Current
projects include radio firmware, definition
and modelling of low-level air protocols, DSP
algorithms, modelling large-scale behaviour
and server/database architecture.
We would like to hear from graduates and
junior engineers with knowledge and
experience in areas such as: DSP algorithm
development, protocols or Embedded
software for communication systems.
Knowledge of radio propagation or
communication systems algorithms would be
an advantage.
This role offers the opportunity for recent
graduates to gain considerable knowledge
and experience within a small, highly
experienced and entrepreneurial team, while
working on exciting and challenging
technologies.
For full details online
enter reference:
JSV7448JHGRAD
ASIC Verification Engineer
- Microprocessors
Location: Cambridgeshire
Type: Permanent
Salary/Rate: Competitive + benefits
Job Details: Based in Cambridgeshire, this
company is looking for a bright, hands-on
Senior ASIC Verification Engineer to work
within their world-leading processor division.
This exciting and varied role requires both
hardware and software verification skills, as
well as the ability to work enthusiastically as
part of a team. Excellent written and verbal
communication is essential.
The ideal candidate will have several years'
proven experience of verifying complex designs,
using Specman e or SystemVerilog, and the
ability to produce and review test bench
specifications. The successful candidate will be
familiar with RISC Microprocessor systems and
FPGA/Emulation flows, and be confident in
their ability to analyse multiple verification
metrics and have experience in Perl, Python or
other Scripting languages. Excellent team
leadership opportunities and career progression.
For full details online
enter reference: JSJ17222
Communication breakdown
Technology Watch Introduction
There's a lot of talk about how the internet is running out of addresses and how IPv6 will solve the problem. But the web is still working well, despite the pressure.
As an extension of its commitment to providing top quality product, Digi-Key is pleased to partner with New Electronics to bring UK engineers more about why IPv4 isn't likely to be replaced any time soon.
Mark Larson,
President, Digi-Key
If you had to ask how you would go about redefining the protocols that underpin the internet, it would be hard to resist answering: "I wouldn't start from here if I were you."
Even as the ubiquitous network morphed from its role as the military-oriented Arpanet into the information superhighway, its digital underpinnings were beginning to creak.
In hindsight, although the consultancy that created the original protocol set played a crucial role, you probably would not reward Bolt, Beranek and Newman (BBN) with a block of 16 million internet protocol (IP) addresses. But BBN joined major corporations and military contractors, such as IBM, in the exclusive club of the internet's superusers.
From today's perspective, it might have been more sensible to allocate a large chunk of those addresses to other people rather than restrict countries outside the west to less than 20% of the available IP space.
What is surprising is how resilient the system is, given that the communications industry has been piling on the sticking plasters to try to stop internet growth from grinding to a halt. In doing so, the internet is not so much a network of fully paid-up peers; rather, it's a massive tree, with most users sitting in the roots behind not just one gateway but several.
The arrangement creates numerous security headaches. And yet it all works well enough that it is hard to work out just when the world will be ready to go to its often-promised next generation, IP version 6. It would probably be a safer bet to put a time on when the front-end circuitry of RF communications systems will go to fully digitally oriented wideband amplifiers and receivers: the focus of the other Technology Watch in this issue.
Graham Pitcher,
Editor, New Electronics
The internet's patch job

Chris Edwards says it will be a brave person who bets on IPv6 being implemented any time soon.

Practically every year, networking
experts claim the internet is about
to run out of room, but the truth is
the internet has been close to running out
of room since the mid 1990s.
In the wake of the Netscape-driven stock bubble, the Internet Engineering Task Force (IETF) put forward a couple of sticking plasters meant to tide the industry over until it could finally move on with a wholesale upgrade from Internet Protocol version 4 (IPv4) to IPv6 or, as it was known when it was proposed in 1994, IP Next Generation (the reinvented Star Trek series was drawing to a close that year).
If you were wondering whatever happened to IPv5, it never existed, at least technically. The version number comes from one of the fields of the IP packet header. IPv4, naturally, used the number 4. But, in the 1970s, companies supplying internet hardware trialled a rather different form of IP, one aimed at passing voice, video and real-time simulation data, all useful for distributed simulations of battlefield activities. To mark these packets as being different from regular IP packets, they chose to use 5 in the version-number field. As it did not make sense to call IPng version 5 and use the number 6 to identify the packets, the IETF decided to move straight from IPv4 to IPv6. For a brief period, it was going to be IPv7, until the IETF established that no-one had successfully snaffled the ID 6 during the previous 20 years.
IPv4 has the capacity to handle 4billion addresses, in theory. Even before you consider the possibility of giving each toaster, fridge and light bulb its own IP address, simply totting up the global population quickly reveals the heart of the problem that was spotted more than 15 years ago. There is a shortfall of more than 2.5billion if everybody on the planet demanded just one IP address. Not only that, IP addresses are not distributed evenly; because of the way in which IP addresses were handed out in the early days, North America lays claim to 60% of available addresses. Only 20% are available to those outside the Western nations.
With its 128bit address field, instead of one with just 32bits, IPv6 would sweep aside any concerns over available addresses. With its 300 undecillion (3 x 10^38) IDs, you could give an internet address to practically anything (see fig 1). But the shift to IPv6 has not happened.
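The scale of the problem is easy to reproduce with a back-of-envelope calculation; the world population figure below is an assumed round number for 2011.

```python
# Back-of-envelope check of the address arithmetic above; the population
# figure is an assumption, everything else follows from the bit widths.
ipv4_total = 2 ** 32        # about 4.3 billion addresses, in theory
ipv6_total = 2 ** 128       # roughly 3.4 x 10**38 addresses
population = 7_000_000_000  # assumed world population, 2011

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"shortfall at one address each: {population - ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.2e}")
```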
Given the restrictions on IPv4 addresses, it seems incredible that the internet has grown the way it has for 15 years without grinding to a halt. But the sticking plasters have proven far more effective than anyone anticipated when they were first introduced.
When the internet was cordoned off for use by the military and its suppliers, no-one considered the possibility of running out of addresses, even though they were handed out in a way which, from today's perspective, seems very inefficient.
Users fell into one of five classes, from A to E, although only A to C were allocated to a significant degree. Class A users sat at the top of the pile with a 24bit address block. This corresponds to the first number you see in an IP address using the familiar dot notation, such as 10.0.0.1, and gives the user more than 16million addresses to work with. Class B provides a 16bit block, or 65,000 addresses, with Class C supporting 256 addresses using an 8bit block.
Home users will be familiar with a Class C-sized allocation: many home routers are preconfigured to work within a small block of the 192.168.0.0 to 192.168.255.255 private range for the local side of the network.
Fig 1: Comparison of IPv4 and IPv6 headers. The IPv6 header has a simpler structure than its predecessor, but crucially extends the source and destination address fields to 128bit.
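For readers who want to poke at the headers themselves, the IPv4 layout in fig 1 can be pulled apart with a few lines of standard-library Python. The 20 header bytes below are made up for illustration, and the checksum is not valid.

```python
# A sketch of unpacking an IPv4 header along the lines of fig 1, using
# only the standard struct module; the bytes are illustrative.
import struct

raw = bytes.fromhex("45000054abcd40004001f87bc0a80001c0a800fe")

version_ihl, tos, total_length = struct.unpack("!BBH", raw[:4])
ident, flags_frag = struct.unpack("!HH", raw[4:8])
ttl, proto, checksum = struct.unpack("!BBH", raw[8:12])
src = ".".join(str(b) for b in raw[12:16])
dst = ".".join(str(b) for b in raw[16:20])

print("version:", version_ihl >> 4)                  # -> 4
print("header length:", (version_ihl & 0xF) * 4)     # -> 20 bytes
print("ttl:", ttl, "protocol:", proto)               # protocol 1 = ICMP
print("src:", src, "dst:", dst)
```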
Classless internet domain routing (CIDR) effectively scrapped this system, making it possible to define blocks in terms of the number of address bits needed. It is still possible, in principle, to have a Class A-sized block, but it is defined as the number of bits that span a block's range, appended to the end of the starting address. The fewer bits in the CIDR prefix, the bigger the number of hosts. IBM's Class A-sized block, for example, is 9.0.0.0/8. A network of 1024 hosts, equivalent to four Class C-sized blocks, would be defined as a /22 CIDR range.
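Python's standard ipaddress module makes the CIDR arithmetic easy to check. The 9.0.0.0/8 block is the IBM example from the text; the /22 below uses a documentation prefix as a stand-in.

```python
# Verifying the CIDR arithmetic above with the standard ipaddress module.
import ipaddress

class_a = ipaddress.ip_network("9.0.0.0/8")   # IBM's Class A-sized block
print(class_a.num_addresses)                  # -> 16,777,216

# A /22 spans 1024 addresses: the 'four Class C-sized blocks' of the text.
# 198.51.100.0 is a documentation prefix, used here purely as a stand-in.
block = ipaddress.ip_network("198.51.100.0/22")
print(block.num_addresses)                    # -> 1024
```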
Home-based internet users will be
familiar with the other IP address fix:
network address translation (NAT).
Although some home users can rent fixed
IP addresses from their internet service
provider (ISP), most make do with a
floating address: allocated by the ISP
when the DSL modem logs into the
network. Even those with a fixed address
will generally connect more than one
computer to the internet through this
single IP address (see fig 2).
Many organisations will have
hundreds, maybe thousands, of machines
sitting behind a NAT server so traffic, as
seen by routers on the internet, seems to
come from just one machine. Some
countries, starved of IP addresses, even
force companies to share a single address
behind a NAT server.
Managing the mapping between local
and internet addresses looks
straightforward, but is fraught with
difficulties: the NAT server has to
maintain a table of ingoing and outgoing
addresses so it can direct packets to the
right machines.
IP itself is a stateless protocol. Every
packet is a single entity with no data about
packets that arrive before or after it in a
stream; the transmission of that
information is left to higher-layer
protocols. These, in turn, provide the
mechanism for a NAT server to determine
how packets sent to the same IP address
can be distributed to multiple machines
on the local side of the network (see fig 3).
The Transmission Control Protocol
(TCP), which is normally carried by IP
packets, provides a stateful connection.
The TCP protocol takes care of setting up,
controlling and tearing down
connections using a series of packets
carried using IP. The User Datagram
Protocol (UDP), on the other hand,
provides a similar service to IP, making it
possible for different applications on a
machine to send packets out to the
network using the same IP address. The
operating system can direct the
incoming packets to the right application
by monitoring which application is using
which UDP port number.
The process has been cited as one of
the mechanisms for shortening the
battery life of portable devices: without
special provision, they have to keep
sending packets in order to maintain a
connection on the internet, allowing mail
servers to push messages to them.
Fig 2: How internet communication works. Clients may connect to the core internet directly or, potentially, through multiple levels of network address translation.

The NAT server maintains as little state information as it can. Until a computer on the local side tries to talk to a machine on the internet, the NAT server has no idea it exists. But when it receives the packet, the server inspects the contents, adds the
local IP address of the sending machine to
its translation table, then reformats the
packet to contain the IP address of the NAT
server itself. By taking a note of which TCP
or UDP ports the packet uses, the NAT
server can trace back to the original
sender any reply from the remote internet
machine. Because the NAT server has no
knowledge of clients on the local side until
they start sending data, it cannot relay
unsolicited packets from the internet side
to local machines unless they have
initiated contact already. This has become
a key security feature for many users. If a
machine on the local side cannot be
found, it is extremely hard to hack.
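The bookkeeping can be caricatured in a few lines of Python; the public address and sequential port policy here are invented, and a real NAT would also record the transport protocol and expire idle entries.

```python
import itertools

# Toy translation table for a NAT with a single public address.
PUBLIC_IP = "203.0.113.7"             # invented public address
_ports = itertools.count(50000)       # invented port-allocation policy
table = {}                            # public port -> (local IP, local port)

def outbound(local_ip, local_port):
    """Rewrite an outgoing packet's source address, remembering the mapping."""
    for port, src in table.items():
        if src == (local_ip, local_port):
            return PUBLIC_IP, port    # reuse the existing entry
    port = next(_ports)
    table[port] = (local_ip, local_port)
    return PUBLIC_IP, port

def inbound(public_port):
    """Only ports a local machine has already used can be reached from outside."""
    return table.get(public_port)     # None: unsolicited packet, dropped

print(outbound("192.168.0.12", 3400))  # ('203.0.113.7', 50000)
print(inbound(50000))                  # ('192.168.0.12', 3400)
print(inbound(50001))                  # None
```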
The problem for mobile devices comes
from the need to keep the table entries
alive in the NAT server. Normally, UDP
translations are only stored for a few
minutes; those for TCP can last up to an
hour. In many cases, the short-lived
nature of UDP translation is not
problematic. UDP/IP is used for voice-over-
IP and gaming protocols, where activity is
often quite high for relatively short periods
and once machines have stopped
communicating they are unlikely to start
again without initiating a new connection.
These protocols also tend to use UDP
because they can tolerate some packet
loss and it is better, for latency reasons, to
compensate for lost data using forward
error correction than to force the remote
machine to resend packets.
However, UDP is also used for some
IP tunnelling protocols that implement
virtual private networks. In this
situation, the virtual network connection
needs to be sustained for some time,
with bursts of activity separated by
relatively long silences. To stop the
channel from being lost when the NAT
server cleans up, the mobile device has
to keep sending keepalive packets to
the virtual network. Mobile companies
have come up with techniques that try to
maintain UDP connections without
demanding high-frequency keepalives.
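A hedged sketch of such a keepalive loop; the 25 second interval and server address are placeholders, and a real client would adapt the interval to the NAT it finds itself behind.

```python
import socket
import threading
import time

# Sketch of a VPN-style client refreshing its NAT entry. The interval and
# the server address are invented for illustration.
KEEPALIVE_SECONDS = 25
server = ("203.0.113.9", 4500)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def keepalive():
    while True:
        sock.sendto(b"\x00", server)   # one tiny datagram refreshes the mapping
        time.sleep(KEEPALIVE_SECONDS)

threading.Thread(target=keepalive, daemon=True).start()
```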
Skype and other VoIP systems use a
method called hole punching to make
machines on the local side of a NAT server
visible to internet-based computers. The
technique works by having clients
maintain long-term connections with the
Skype server, which relays call requests
from one user to another. If both users are
behind NAT firewalls, they cannot
otherwise get in touch with each other directly.
Hole punching (see fig 4) works by
having the Skype server tell one client
that an internet user at a given address
is trying to call. The receiver then
punches a hole in the firewall by
sending a packet to that machine, even
though it has no chance of reaching the
initiator directly. However, when the
Skype server tells the calling machine
about the receiver's location, the caller
then punches a corresponding hole in
its firewall with the receiver's known IP
address and port. Because the Skype
server has coordinated the requests with
what are now known ports and addresses,
the NAT firewalls will direct the Skype
packets to the correct destinations.
Fig 4: Hole punching makes machines on the local side of a NAT server visible: by telling clients to send packets that the system knows will be rejected by the destination gateway, protocols such as Skype allow holes to be punched in the NAT's firewall
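The punching step itself is just an ordinary UDP send, as this sketch suggests; it assumes a rendezvous server has already exchanged the public address:port pairs, and the peer address shown is invented.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 40000))
peer = ("198.51.100.23", 41234)    # peer's public mapping, from the server

# This outbound packet is expected to be rejected by the peer's NAT, but it
# creates a mapping in our own NAT: the 'hole'.
sock.sendto(b"punch", peer)

# Once the peer has done the same, its packets arrive looking like replies
# to an existing mapping and both NATs let the traffic through.
sock.settimeout(5.0)
try:
    data, addr = sock.recvfrom(1024)
except socket.timeout:
    data = None                    # peer has not punched its hole yet
```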
Because hole punching allows local
machines to set up connections that listen
for packets from callers, it is not popular
with network administrators. It provides a
method of attack that hackers can exploit
against machines sitting behind a
firewall. A similar problem arises for users
of peer-to-peer networks.
Many machines sitting behind NAT
servers and firewalls will be used to
browse information stored on servers on the
world wide web, and these servers have their
own IP address conservation mechanism.
Before the release of version 1.1 of the
HyperText Transfer Protocol (HTTP), you
could only associate one website address
with each IP address. Because companies
wanted to buy lots of vanity addresses
and to support advertising campaigns,
there were legitimate concerns that
addresses would run out even more
quickly than expected. HTTP 1.1 made it
possible to map multiple uniform resource
identifiers (URIs) onto one IP address,
leaving the machine sitting on that
address to determine which of the sites it
hosts should handle a request.
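The mechanism is HTTP 1.1's mandatory Host header: two requests to the same IP address differ only in the name they carry, as in this sketch (the target address is a documentation placeholder).

```python
import socket

# Two HTTP 1.1 requests to the same IP address; the server picks the
# virtual host from the Host header. The IP is a placeholder; point it at
# a real shared-hosting server to try this.
def fetch(host, ip="203.0.113.80", port=80):
    s = socket.create_connection((ip, port))
    s.sendall((f"GET / HTTP/1.1\r\nHost: {host}\r\n"
               "Connection: close\r\n\r\n").encode("ascii"))
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    return reply

# fetch("www.example.com")   # same address, different sites
# fetch("www.example.org")
```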
IPv6 potentially opens up
possibilities for all forms of networking,
but if the past 15 years are anything to
go by, and thanks to address
conservation, it would be a brave person
who bets on IPv6 coming into
widespread use anytime soon. The
addresses can be applied to anything
with a network interface, letting you
control the light bulbs and thermostat in
your home from anywhere.
Unfortunately, it also means the end of
NAT-based security, so it could mean
letting anyone have access to your
home's electronics.
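To put the scale of the change in perspective, a quick calculation with Python's ipaddress module and the IPv6 documentation prefix:

```python
import ipaddress

# A single standard IPv6 subnet (a /64) holds 2**64 interface addresses,
# dwarfing the whole 32bit IPv4 space; 2001:db8:: is the documentation prefix.
lan = ipaddress.ip_network("2001:db8::/64")
print(lan.num_addresses)                                # 18446744073709551616
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 4294967296
```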
NAT-based security is hardly perfect, as
hole-punching demonstrates, but IPv6
provides the possibility of adding end-to-
end security by enabling IPsec encryption
by default. However, this, in turn, demands
that every peer on the network has
effective protection against hacking and
worm attacks, which is going to be difficult
for highly resource-restricted clients
based on simple microcontrollers.
The future for the traditional
protocols that sit on top of IP is even
more assured. Researchers keep coming
up with new protocols aimed at
transferring video or replicating the
telephony system on an IP
infrastructure, but TCP and UDP remain
the primary vehicles for transferring
data, despite their flaws. For example,
TCP is vulnerable to denial-of-service
attacks through a process called SYN
flooding, in which an attacker sends
multiple startup messages to a server,
then disappears. The series of half-open
connections this creates can starve
other users of access to the server.
Another problem is that some
applications do not fit well on top of either
TCP or UDP. Using UDP for relaying
telephone calls over IP is not ideal and
prompted the IETF to create an
alternative to TCP called the Stream
Control Transmission Protocol (SCTP).
SCTP makes it possible to send
streams of data in independent channels
to a client. For example, text and pictures
for a web page can go in different streams.
This helps overcome a problem with TCP,
where one stream can block others
because all data has to be delivered in
order, so transmission may stall for the
entire sequence if one packet gets lost
and needs to be retransmitted.
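Where the operating system exposes SCTP, Python's standard socket module can at least open an association, as in this hedged sketch; per-stream features need a third-party binding such as pysctp, and the server address is invented.

```python
import socket

# One-to-one style SCTP association. This only works where the OS and the
# Python build define IPPROTO_SCTP (Linux with SCTP support, for example).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                     socket.IPPROTO_SCTP)
sock.connect(("203.0.113.9", 9899))   # placeholder server address
sock.sendall(b"signalling payload")
sock.close()
```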
In common with other improved
protocols for the internet, SCTP is not used
widely, even though implementations
exist for a variety of operating systems. Its
main application today is as the protocol
that relays telephony signalling, such as
SS7 packets, over IP networks. The core
problem is that applications need to be
rewritten to take account of SCTP; it's not
possible to build support into operating
systems such that the transport protocol
can be switched transparently from TCP to
SCTP. And without a growing list of
applications demanding support, no-one is
in a hurry to build SCTP handling into
internet equipment.
Despite meetings, proposals and
promises, the internet is likely to carry on
working in the way that it has for some
time. Change will only come when the
sticking plasters cannot stop the bleeding
and governments and corporations come
together to order major surgery.
The analogue barrier
Chris Edwards looks at some schemes being implemented to boost mobile communications efficiency
The trend in RF design over the past
20 years has clearly been towards
greater levels of digitisation in
order to squeeze more capacity out of
some of the most congested parts of the
radio spectrum. At the same time,
operators want to reduce power
consumption in their basestations and
hubs while users want better battery life
out of their handsets. These factors are
uniting to force big changes on the
analogue front-end.
But as you get closer to the antenna,
it becomes harder to simply transfer
more work onto a digital signal
processor (DSP). Moving towards more
digitally focused protocols tends to
reduce efficiency, rather than increase
it, when you look at devices such as
power amplifiers.
In the shift from 2G protocols such as
GSM to the code-division multiple access
(CDMA) schemes of 3G, the modulation
techniques became more complex. GSM
uses the Gaussian minimum shift
keying technique, which shifts the
phase of a constant-amplitude sine
wave backwards and forwards to relay
data. The developers of the standard
adopted this technique because its
constant envelope allows the power
amplifier to operate in its most efficient,
saturated region, helping
The demand for high data rates that
came with 3G pushed protocol designers
into adopting more complex modulation
schemes, such as quadrature amplitude
modulation, in which phase and
amplitude are altered. To improve the
accuracy of the signal output, the power
amplifier has to back off further into its
linear region, which is less power-
efficient than working close to the
distinctly non-linear saturated region
that would be used traditionally. Digital
pre-distortion can improve the situation
by making it possible to get closer to the
saturated region, but it's not a complete
solution to the problem. And the
situation is not improving for the power-
amplifier designer.
As modulation schemes become more
complex, the peak-to-average ratio
worsens. In LTE, peak-to-average values
are close to 10dB, compared to 7dB for
3G UMTS and 3dB for GSM (see fig 1).
Fig 1: Peak-to-average power ratios are increasing with protocol complexity
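The numbers are easy to reproduce: summing many random-phase carriers, as OFDM-based protocols such as LTE effectively do, yields occasional large peaks, while a constant-envelope carrier has none. The waveform below is illustrative rather than a real LTE signal.

```python
import numpy as np

# Illustrative peak-to-average power ratio (PAPR) comparison between a
# constant-envelope carrier (GSM-like) and a sum of many random-phase
# carriers (LTE/OFDM-like); exact figures depend on the waveform details.
rng = np.random.default_rng(0)
n, carriers = 4096, 256
t = np.arange(n)

constant_envelope = np.exp(2j * np.pi * 0.1 * t)

phases = rng.uniform(0, 2 * np.pi, carriers)
multi_carrier = sum(np.exp(2j * np.pi * k * t / n + 1j * phases[k])
                    for k in range(carriers)) / np.sqrt(carriers)

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

print(f"constant envelope: {papr_db(constant_envelope):.1f}dB")  # 0.0dB
print(f"multi-carrier:     {papr_db(multi_carrier):.1f}dB")      # ~10dB
```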
One answer was to look back in
history at a power-amplifier design
developed before the Second World War.
In a concept demonstrated in May 1936
using vacuum tubes by Bell Labs
researcher William Doherty, two
amplifiers are used in parallel: one
supplies the bulk of the power at a
constant level; the second provides
additional energy for modulation peaks.
Because the main amplifier can be run
closer to saturation, overall efficiency
increases, but the secondary amplifier
provides control over the signal.
There is no need to stop at two
amplifiers. Engineers from Pohang
University of Science and Technology in
Korea have suggested that, under ideal
conditions, a five-way design could be
useful, although their work has so far
resulted in just a three-way
implementation tuned for wideband-CDMA
protocols. At the moment, peak-to-
average ratios are not so large that adding
more amplifiers makes sense. But future
standards could make more complex
Doherty-based designs viable (see fig 2).
Fig 2: The ideal efficiencies of N-way Doherty amplifiers: multi-way designs improve efficiency as the required back-off range increases
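The textbook approximation behind fig 2 shows why back-off hurts: an ideal Class B amplifier's efficiency falls linearly with output voltage, while an ideal two-way Doherty restores peak efficiency at 6dB back-off. A rough calculation:

```python
import numpy as np

# Ideal Class B efficiency scales with output voltage: eta = (pi/4)*(V/Vmax),
# and output back-off in dB is 20*log10(Vmax/V).
ETA_MAX = np.pi / 4               # ideal Class B peak efficiency, ~78.5%

def class_b_efficiency(backoff_db):
    return ETA_MAX * 10 ** (-backoff_db / 20)

for obo in (0, 3, 6, 10):
    print(f"{obo:>2}dB back-off: Class B efficiency {class_b_efficiency(obo):.1%}")
# At 6dB an ideal two-way Doherty is back at ~78.5%, against Class B's 39.3%
```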
However, there is a second issue that
complicates the design of power
amplifiers. Some 14 individual frequency
bands are available for LTE, with different
regions and countries picking and
choosing from the list to fit in with
existing cellular, TV and radio networks.
Concepts such as white-space radio are
pushing radio designers to make their
circuitry even more flexible: sniff the
airwaves to find free space and then
move into it, leaving that frequency band
when too many other transmitters show
up. For the most part, a more flexible radio
means using software to control a set of
standard hardware building blocks, rather
than tuning a hardware circuit to a
specific RF range.
Close to the baseband, the situation is
relatively simple: use more DSP. But as
the signal reaches the antenna, things
get more complicated. Power amplifiers
are optimised for operation in fairly
narrow frequency bands and that
includes Doherty architectures, although
the multi-way version help deal with the
high crest factors of the newer wideband
protocols such as LTE. It is possible to
design amplifiers that have a wider
frequency range, but these suffer in
terms of power efficiency.
Another blast from the past offers a
way to claw back some efficiency. First
described a year after Doherty's proposal,
envelope tracking continually adjusts the
supply voltage to the power amplifier so
that it gets just enough to deliver the
required signal (see fig 3). This reduces
the amount of energy dissipated as heat
from the notional over-voltage in a
conventional amplifier design. The
efficiency improvement from envelope
tracking tends to increase the further the
amplifier operates from its frequency
sweet spot.
Fig 3: How envelope tracking improves efficiency: the supply voltage tracks the transmitted envelope rather than staying fixed, cutting the energy dissipated as heat
But the circuitry is not entirely
straightforward to implement and
building a power-supply modulator
accurate enough for the job takes some
effort. One approach, used by startup
Nujira, is to predict the amplitude of the
RF output from the modulated signal
and then apply a function that can be
used to derive what the drain voltage
should be. However, the calculation
imposes a delay, so the trick is to delay
the RF signal on its way to the amplifier
so that supply voltage and RF output line up.
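The signal path can be sketched as follows; the shaping law, supply rails and eight-sample delay are invented for illustration and are not Nujira's actual design.

```python
import numpy as np

# Envelope-tracking sketch: derive a supply voltage from the baseband
# envelope, then delay the RF-bound samples so the two stay aligned.
def supply_voltage(iq, v_min=0.5, v_max=5.0):   # invented rail limits
    envelope = np.abs(iq)                       # predicted RF amplitude
    shaped = envelope / envelope.max()          # crude linear shaping table
    return v_min + (v_max - v_min) * shaped

def delay_rf(iq, samples=8):                    # invented pipeline latency
    return np.concatenate([np.zeros(samples, dtype=complex), iq])

rng = np.random.default_rng(3)
iq = rng.normal(size=1000) + 1j * rng.normal(size=1000)
v_dd = delay = supply_voltage(iq)   # drives the power-supply modulator
rf = delay_rf(iq)                   # matched against the modulator's latency
```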
In general, envelope tracking is being
implemented in handsets more quickly
than in the somewhat larger power
amplifiers of basestations, where the
tracking is harder to implement efficiently.
For the basestations, changes to more
exotic processes are showing greater
promise. Gallium arsenide has gradually
given way to silicon laterally diffused
metal-oxide semiconductor (LDMOS), largely
because the silicon devices can handle
very high power outputs. However, gallium
nitride and silicon carbide may supplant
LDMOS in the future, thanks to their ability
to work efficiently at high temperature,
reducing the need for active cooling.
DSP can help; power amplifiers are
now being designed to handle a level of
predistortion, intended to compensate for
frequency-dependent losses in the
amplifier subsystem. It increases the
amount of compute horsepower needed
in the RF front-end, but this generally
needs a lot less energy than that wasted
by the power amplifier itself if
predistortion is not used.
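The idea can be caricatured with a memoryless third-order model; the coefficients below are invented, and production predistorters are adaptive and also correct memory effects.

```python
import numpy as np

# Memoryless sketch of digital pre-distortion: a compressive PA model and
# its first-order inverse.
def pa(x, a3=0.15):
    return x - a3 * x * np.abs(x) ** 2    # gain compression at high drive

def predistort(x, a3=0.15):
    return x + a3 * x * np.abs(x) ** 2    # expand first to cancel compression

x = np.linspace(0.0, 1.0, 6)
print(np.round(pa(x), 3))                 # compressed: droops below the input
print(np.round(pa(predistort(x)), 3))     # noticeably closer to a straight line
```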
There is a similar problem on the
receive side. In principle, in a software
radio, a single A/D converter feeds into a
high-speed DSP that uses digital
demodulation to pull the signal of interest
out of the rest of the radio soup. In reality,
neither the A/D converters nor the DSPs
available today can offer the level of
horsepower needed unless you consider
the arrays in battlefield military radios that
sit in the back of jeeps and helicopters.
The problem for the A/D converter is
not so much the sampling rate (there are
designs that can work at rates of more
than 1GHz) but the resolution. The
demodulation process requires very high
dynamic range to let the computer find
the buried signal.
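The usual rule of thumb for an ideal converter, SNR of roughly 6.02N + 1.76dB for N bits, shows how quickly resolution buys dynamic range:

```python
# Ideal A/D converter rule of thumb: SNR ~= 6.02*N + 1.76dB for N bits.
# Pulling a weak channel from between strong neighbours needs the extra
# dynamic range that only high resolution brings.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits:>2}bit converter: {6.02 * bits + 1.76:.1f}dB ideal SNR")
```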
The increase in DSP power allows
some use of wideband techniques,
where an A/D converter uses
subsampling to extract a range of
frequency bands. However, some radio
standards have stringent demands that
make it hard to realise this architecture.
GSM, for example, demands that a
handset be able to pluck a weak signal
from between channels that may
contain some very noisy transmitters.
As a result, an array of fixed surface-
acoustic wave (SAW) filters often works
out to be more effective than using a
more digitally-oriented approach.
The antenna is, potentially, where
some of the biggest changes over the
next few years will be seen. Handsets
now need to support many different radio
standards, from Bluetooth to a variety of
LTE-compatible bands: can one antenna
handle all of those? The answer is,
surprisingly, a cautious yes. But, as with
filters, the answer is not through using a
wideband antenna, but through
integrating many different antennas into
one package.
There have been attempts to develop a
true wideband antenna: fractal designs
showed some promise about ten years
ago. As the name suggests, these
antennas exploit the self-similar nature of
fractal geometry to allow one printed
structure to behave as many different
antennas. This design does work, but that
is where the good news ends. In tests,
researchers from the University of
Brasilia found performance drops with
frequency: the designs work best in RF
bands below most of those used in digital
communications. As a result, the fractal
antenna may be useful where DSP
horsepower and amplification technology
can overcome inefficiencies and where
the ability to cope with arbitrary
frequency bands is vital. For everything
else, you need a different approach.
With careful design, it is possible to
create an antenna structure that can
handle multiple frequencies
simultaneously, but the effectiveness of
the design depends on which radios will
be used at the same time. As a result, the
multimode antenna has to be custom-
designed for every use case.
Moving away from an entirely passive
design can help improve performance. It
is possible to put transistors or diodes
into the antenna itself to switch between
different radio modes. Another approach
is to stick with a more generic antenna
design and then electrically tune the
matching network used to convey the
signal to the receiver circuitry. Although
trickier to design, tunable antennas
generally perform better than tunable
matching networks. If the tuning range is
pushed too far, the matching network can
begin to incur significant signal losses.
The result is likely to be antennas that
themselves resemble printed circuit
boards, complete with embedded
components. But, even with those
changes, the fundamental design of the
antenna, as with the power amplifier and
filters, remains rooted in traditional
techniques.