Thursday, 6 August 2015

ARM’s big.LITTLE Technology


Most embedded systems strive to minimize power consumption, which demands considerable design effort and the right technology. This is a major reason ARM processors are so widely used in mobile devices today.

ARM stands for Advanced RISC Machine.
RISC stands for Reduced Instruction Set Computing.

A characteristic feature of ARM processors is their low electric power consumption, which makes them particularly suitable for use in portable devices. In fact, almost all modern mobile phones and personal digital assistants contain ARM CPUs, making them the most widely used 32-bit microprocessor family in the world. Today ARMs account for over 75% of all 32-bit embedded CPUs.

ARM offers several microprocessor core designs which are used in such applications as smartphones and tablets.

Cores for 32-bit architectures include the Cortex-A15, Cortex-A12, Cortex-A17, Cortex-A9, Cortex-A8, Cortex-A7 and Cortex-A5, and the older "Classic ARM Processors", as well as the Cortex-R series for real-time applications (Cortex-R7, Cortex-R5, Cortex-R4) and the Cortex-M series for microcontrollers (Cortex-M4, Cortex-M3, Cortex-M1, Cortex-M0+, Cortex-M0).

The latest technology introduced by ARM is big.LITTLE.

ARM® big.LITTLE™ processing is a power-optimization technology in which high-performance ARM CPU cores are combined with highly efficient ARM CPU cores to deliver peak-performance capacity, higher sustained performance, and increased parallel processing performance at significantly lower average power. The latest big.LITTLE software and platforms can save 75% of CPU energy in low-to-moderate performance scenarios, and can increase performance by 40% in highly threaded workloads. The underlying big.LITTLE software, big.LITTLE MP, automatically and seamlessly moves workloads to the appropriate CPU core based on performance needs. ARM big.LITTLE technology enables mobile SoCs to be designed for new levels of peak performance within the all-day battery life users expect.


The performance demanded by smartphones and tablets is increasing at a much faster rate than technology improvements in battery capacity and the advances in semiconductor process nodes. The need for higher performance directly conflicts with the desire for longer battery life. The solution to this lies beyond process technology and traditional power management and requires further innovation in mobile SoC design. big.LITTLE is one of many power management technologies employed by ARM to save power in mobile SoCs. It works in tandem with Dynamic Voltage and Frequency Scaling (DVFS), clock gating, power gating, retention modes, and thermal management to deliver a full set of power control for the SoC.

big.LITTLE technology takes advantage of the dynamic usage pattern of smartphones and tablets. Periods of high-intensity processing tasks, such as initial web page rendering and game physics calculation, alternate with typically longer periods of low-intensity tasks, such as scrolling or reading a web page, waiting for user input in a game, and lighter-weight tasks like texting, e-mail and audio. The graph (Fig. 1) shows the CPU residency at various DVFS frequency states in a big.LITTLE SoC, with all the relevant power management techniques in operation. It shows the big CPU cores used in burst mode (i.e. for short durations at peak frequency), while the majority of runtime is handled by the LITTLE cores at moderate operating frequencies.

Innovative power-saving techniques are required to sustain the pace of innovation in mobile through performance increases in the same power footprint. Many of the mobile use cases exhibit behavior like that shown in the graph above, presenting an ideal opportunity for big.LITTLE technology to save power while also delivering peak-performance in modern mobile devices.

big.LITTLE Processing – How does it work?

The high-performance and high-efficiency CPU clusters are connected through a cache-coherent interconnect fabric such as the ARM CoreLink™ CCI-400. This hardware coherency gives both the big and LITTLE CPU clusters the same view of memory. The processors look like one multicore CPU to the operating system (OS). User-space software on a big.LITTLE SoC is identical to the software that would run on a standard Symmetrical Multi-Processing (SMP) CPU.

How does the work get scheduled to the right processor?

Global Task Scheduling (GTS) gives the OS awareness of the big and LITTLE processors, and the ability to schedule individual threads of execution on the appropriate CPU core based on dynamic run-time behavior. ARM has developed a kernel space patch set based on GTS called big.LITTLE MP that keeps track of load history as each thread runs, and uses the history to anticipate the performance needs of the thread next time it runs.
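The idea of tracking per-thread load history and picking a cluster from it can be sketched as a toy model. This is purely illustrative: the class, decay factor and thresholds below are hypothetical stand-ins, not ARM's actual big.LITTLE MP kernel code.

```python
# Toy sketch of load-history-based task placement (illustrative only;
# not ARM's actual big.LITTLE MP implementation).

class Task:
    def __init__(self, name):
        self.name = name
        self.load_history = 0.0  # exponentially weighted average load (0..1)

    def record_load(self, load, decay=0.5):
        # Blend the newest sample with past history, as a stand-in for the
        # per-thread load tracking described in the text.
        self.load_history = decay * self.load_history + (1 - decay) * load

def place_task(task, up_threshold=0.8, down_threshold=0.2):
    """Pick a cluster from tracked history: heavy threads go to 'big',
    light threads to 'LITTLE' (thresholds are hypothetical)."""
    if task.load_history >= up_threshold:
        return "big"
    return "LITTLE"  # prefer the efficient cluster otherwise

ui = Task("web-render")
for sample in (0.9, 0.95, 1.0):   # burst of heavy work
    ui.record_load(sample)

audio = Task("audio-decode")
for sample in (0.05, 0.1, 0.05):  # sustained light work
    audio.record_load(sample)

print(place_task(ui), place_task(audio))
```

The exponential average makes recent behavior dominate, so a thread that bursts is promoted quickly and demoted again once it goes quiet.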

Hardware Requirements

For a big.LITTLE system to work seamlessly with software, the CPU subsystem must be fully cache coherent, and the big and LITTLE CPU cores must be fully architecturally identical; they must run all the same instructions and support the same extensions such as virtualization, large physical addressing and so on.

Article By,

Sphoorthy Engineering College

Wednesday, 5 August 2015

Nonlinear Squeezing Time–Frequency Transform for Weak Signal Detection

Conventional time–frequency analysis methods can characterize the time–frequency pattern of multi-component non-stationary signals. However, it is difficult to detect weak components hidden in complex signals because the time–frequency representation is influenced by the signal amplitude. In this paper, a novel algorithm called the Nonlinear Squeezing Time–Frequency Transform (NSTFT) is proposed to characterize the time–frequency pattern of multi-component non-stationary signals. Most importantly, theoretical analysis shows that the NSTFT method is independent of the signal amplitude and depends only on the signal phase, so it can be used for weak signal detection. Moreover, an improved ridge detection algorithm is proposed in this paper for instantaneous frequency estimation. Experiments on simulated and real-world signals show that the NSTFT method can effectively detect weak components in complex signals, and a comparison study with other time–frequency analysis methods also shows the advantages of the NSTFT method in weak signal detection.


• Nonlinear squeezing time–frequency transform (NSTFT) is proposed for weak signal detection.
• Theoretical analysis shows that the NSTFT is independent of the signal amplitude and is only relevant to the signal phase.
• An improved ridge detection algorithm is proposed for IF estimation.
• Experiments and comparison demonstrate the effectiveness of the NSTFT in weak signal detection.
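To make "ridge detection for instantaneous frequency estimation" concrete, here is a minimal sketch using a plain STFT: at each time frame, the instantaneous frequency is estimated as the frequency bin of maximum magnitude. This is a simplified stand-in, not the paper's improved algorithm or the NSTFT itself; the signal and window parameters are assumptions chosen for illustration.

```python
import numpy as np

# Simple ridge detection on a plain STFT: for each frame, the ridge point
# is the frequency bin with the largest magnitude.

fs = 1000.0                               # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 100 * t)           # test tone at 100 Hz

win = 128                                 # analysis window length
hop = 32                                  # hop size between frames
freqs = np.fft.rfftfreq(win, d=1 / fs)

ridge = []
for start in range(0, len(x) - win, hop):
    frame = x[start:start + win] * np.hanning(win)
    mag = np.abs(np.fft.rfft(frame))
    ridge.append(freqs[np.argmax(mag)])   # ridge point for this frame

print(np.median(ridge))
```

With a 128-point window the bin spacing is fs/win ≈ 7.8 Hz, so the ridge lands on the bin nearest 100 Hz; a finer window or interpolation would sharpen the estimate, which is precisely the kind of refinement the paper's improved algorithm addresses.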

Time–frequency analysis; 
Weak signal detection; 
Nonlinear squeezing time–frequency transform; 
Instantaneous frequency; 
Synchrosqueezing transform


Article By -

ECE Department

Asst. Professor

Sphoorthy Engineering College



Orthogonal Frequency Division Multiplexing

Orthogonal frequency-division multiplexing (OFDM) is a method of encoding digital data on multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital communications, used in applications such as digital television and audio broadcasting, DSL Internet access, wireless networks, powerline networks, and 4G mobile communications.

OFDM is a frequency-division multiplexing (FDM) scheme used as a digital multi-carrier modulation technique. A large number of closely spaced orthogonal subcarrier signals are used to carry data on several parallel data streams or channels. Each subcarrier is modulated with a conventional modulation scheme, such as quadrature amplitude modulation (QAM) or phase-shift keying (PSK), at a low symbol rate, maintaining total data rates similar to conventional single-carrier modulation schemes in the same bandwidth.

The primary advantage of OFDM over single-carrier schemes is its ability to cope with severe channel conditions (for example, attenuation of high frequencies in a long copper wire, narrowband interference, and frequency-selective fading due to multipath) without complex equalization filters. Channel equalization is simplified because OFDM may be viewed as using many slowly modulated narrowband signals rather than one rapidly modulated wideband signal. The low symbol rate makes the use of a guard interval between symbols affordable, making it possible to eliminate intersymbol interference (ISI) and utilize echoes and time-spreading (on analogue TV these are visible as ghosting and blurring, respectively) to achieve a diversity gain, i.e. a signal-to-noise ratio improvement. This mechanism also facilitates the design of single-frequency networks (SFNs), where several adjacent transmitters send the same signal simultaneously at the same frequency, as the signals from multiple distant transmitters may be combined constructively rather than interfering, as would typically occur in a traditional single-carrier system.
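The core transmit/receive steps described above can be sketched in a few lines: QPSK symbols are placed on the subcarriers with an IFFT, a cyclic prefix is prepended as the guard interval, and an FFT at the receiver separates the subcarriers again. The parameter choices (64 subcarriers, 16-sample prefix, ideal channel) are illustrative assumptions, not taken from any particular standard.

```python
import numpy as np

# Minimal OFDM symbol sketch: map QPSK symbols onto N subcarriers with an
# IFFT, prepend a cyclic prefix, then recover the data with an FFT.

N, CP = 64, 16                    # subcarriers and cyclic-prefix length (assumed)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)

# QPSK mapping: pairs of bits -> unit-magnitude complex symbols
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(symbols)               # one OFDM symbol in the time domain
tx_cp = np.concatenate([tx[-CP:], tx])  # cyclic prefix guards against ISI

rx = tx_cp[CP:]                         # receiver strips the prefix...
recovered = np.fft.fft(rx)              # ...and an FFT separates subcarriers

print(np.allclose(recovered, symbols))
```

Over a real multipath channel the prefix absorbs the delayed echoes, and equalization reduces to one complex multiply per subcarrier, which is the simplification the paragraph above describes.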

Advantages of OFDM:
High spectral efficiency as compared to other double-sideband modulation schemes, spread spectrum, etc.
Can easily adapt to severe channel conditions without complex time-domain equalization.
Robust against narrowband co-channel interference.
Robust against ISI and fading caused by multipath propagation.
Efficient implementation using the fast Fourier transform (FFT).
Low sensitivity to time synchronization errors.
Tuned sub-channel receiver filters are not required (unlike conventional FDM).

Disadvantages of OFDM:
Sensitive to Doppler shift.
Sensitive to frequency synchronization problems.
High peak-to-average power ratio, requiring linear transmitter circuitry, which suffers from poor power efficiency.
Loss of efficiency caused by the cyclic prefix.

Article By:
ECE Department
Asst. Professor 
Sphoorthy Engineering College



Since the invention of the first IC (integrated circuit), in the form of a flip-flop, by Jack Kilby in 1958, the number of transistors we can pack onto a single chip has doubled roughly every 18 months, in accordance with Moore’s Law. Such exponential development has never been seen in any other field, and it still continues to be a major area of research.

By the mid-eighties, the transistor count on a single chip had already exceeded 1000, and thus began the age of Very Large Scale Integration (VLSI): the process of creating an integrated circuit (IC) by combining thousands of transistors on a single chip. VLSI began in the 1970s, when complex semiconductor and communication technologies were being developed. Though many improvements have been made and the transistor count is still rising, further generation names such as ULSI are generally avoided. It was during this time that TTL lost the battle to the MOS family, owing to the same problems that had pushed vacuum tubes into obsolescence: power dissipation and the limit it imposed on the number of gates that could be placed on a single die.

The second age of the integrated circuit revolution started with the introduction of the first microprocessors: Intel’s 4004 in 1971 and the 8080 in 1974. Today many companies, such as Texas Instruments, Infineon, Alliance Semiconductors, Cadence, Synopsys, Celox Networks, Cisco, Micron Tech, National Semiconductors, ST Microelectronics, Qualcomm, Lucent, Mentor Graphics, Analog Devices, Intel, Philips and Motorola, have been established and are dedicated to the various fields of VLSI, such as programmable logic devices, hardware description languages, design tools, and embedded systems.

Article By:
V Poornima
Asst. Professor
ECE Department
Sphoorthy Engineering College



Sampling gates are transmission circuits in which the output is an exact replica of the input during a selected time interval and is zero otherwise. The time interval for transmission of the signal is selected by an externally applied signal called the gating signal, which is usually rectangular in shape. Sampling gates are also called transmission gates or selection circuits: a sampling gate is a gate circuit that extracts information from the input waveform only when activated by a selector pulse.
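The defining behavior, output equal to the input while the rectangular gating signal is high and zero otherwise, can be demonstrated numerically. The signal, frequency and gating interval below are arbitrary illustrative choices.

```python
import numpy as np

# A sampling gate in discrete time: the output equals the input while the
# rectangular gating signal is high, and is zero otherwise.

t = np.arange(0, 1.0, 0.01)                    # 1 s of samples at 100 Hz
x = np.sin(2 * np.pi * 5 * t)                  # input signal, 5 Hz sine

# Gating signal: high only during the selected interval 0.3 s - 0.6 s
gate = ((t >= 0.3) & (t < 0.6)).astype(float)

y = x * gate  # transmitted only during the selected interval
```

Outside the gating interval the output is identically zero; inside it, the output is an exact replica of the input, which is the property the definition above describes.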


Sampling gates may be classified as:
1. Unidirectional sampling gates
2. Bidirectional sampling gates

Difference between Logic gate and Sampling Gates:

A logic gate is a circuit with several inputs but only one output, which is activated by particular combinations of inputs.

For example, if two wires feed an AND gate and only one wire carries a current (is on, 1) while the other is 0, then the output is 0, since both wire 1 AND wire 2 must carry a current for the output to be 1 (on).

A sampling gate, on the other hand, is a circuit that produces an output only when first activated by a preliminary pulse. So, if a current passes through a wire into a sampling gate, the output is 0 unless the gating signal allows the current through.

Unidirectional sampling gate:

A unidirectional gate can transmit either positive or negative pulses (or signals) to the output. It means that this gate transmits pulses of only one polarity to the output. The signal to be transmitted to the output is the input signal. This input signal is transmitted to the output only when the control signal enables the gate circuit. Therefore, we discuss two types of unidirectional diode gates, namely, unidirectional diode gates that transmit positive pulses and unidirectional diode gates that transmit negative pulses.

Bidirectional sampling gate:

A bidirectional gate can transmit both positive and negative pulses (or signals) to the output; that is, it transmits pulses of both polarities. The signal to be transmitted to the output is the input signal, and it is transmitted only when the control signal enables the gate circuit. Accordingly, we discuss two types: bidirectional diode gates and bidirectional transistor gates, each of which transmits both positive and negative pulses.

Applications of sampling gates:

Sampling gates find applications in many circuits. Sampling gates are used in multiplexers, D/A converters, chopper stabilized amplifiers, sampling scopes, etc. 

Hybrid method for designing digital Butterworth filters


A procedure for designing digital Butterworth filters is proposed. The procedure determines the denominator and the numerator of the filter transfer function based on the positions of the poles in the s-plane and the zeros in the z-plane, respectively, and calculates the gain factor using a maximum-point normalization method. In contrast to some conventional algorithms, the presented procedure is much simpler, directly obtaining the filter with the desired 3-dB frequencies. This makes the presented algorithm a useful tool for setting the band boundaries of electronic or communication systems’ frequency responses. Moreover, the proposed algorithm is compatible with high-order transformations, which are a limitation of general pole-zero placement techniques. The proposed method is illustrated by examples of designing low-pass, high-pass, band-pass, and band-stop filters.


1. The proposed algorithm is for designing digital IIR filters.
2. The presented procedure is much simpler by directly finding 3-dB frequencies.
3. The resulting filters strictly follow the desired specifications.
4. The proposed algorithm is compatible with high-order transformations.
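For comparison, the conventional route the abstract contrasts itself with can be shown with scipy's standard bilinear-transform Butterworth design; this is not the paper's hybrid pole/zero-placement method, and the sampling rate, cutoff and order below are illustrative assumptions.

```python
import numpy as np
from scipy import signal

# Conventional digital Butterworth design via scipy (bilinear transform
# with prewarping), checked at the requested 3-dB frequency.

fs = 1000.0   # sampling rate, Hz (assumed)
fc = 100.0    # desired 3-dB cutoff, Hz (assumed)
order = 4

b, a = signal.butter(order, fc, btype="low", fs=fs)

# Evaluate |H| at fc: for a Butterworth filter it should be ~1/sqrt(2)
w, h = signal.freqz(b, a, worN=[2 * np.pi * fc / fs])
print(abs(h[0]))
```

Because scipy prewarps the cutoff before the bilinear transform, the magnitude at fc sits almost exactly at the 3-dB point, the same specification the proposed procedure targets directly.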

Article By:




Sphoorthy Engineering College

CE Amplifier

An amplifier is a circuit which increases the magnitude of the input signal. Ideally, the output should be a faithful, amplified replica of the input signal. This behaviour is characterised by analysing the parameters of the amplifier. The common emitter (CE) is the most widely used amplifier configuration, employed where high voltage gain is required. For both PNP and NPN transistor CE amplifier circuits, the input is applied to the base and the output is taken at the collector terminal; the emitter is the terminal common to both input and output.

Parameter                        Characteristic
Voltage gain                     Medium
Current gain                     Medium
Power gain                       High
Input/output phase relationship  180 degrees
Input resistance                 Medium
Output resistance                Medium

The input impedance is around 1 kilo-ohm, though it varies with the design specifications. The output impedance is high, around 10 kilo-ohms, and can sometimes be even higher. The current gain of the common emitter amplifier is denoted by the Greek letter beta and is defined as the ratio of output collector current to input base current; a change in the input base current results in a change in the output collector current. The operating point is an important parameter here, because the circuit works as an amplifier only if the operating point lies in the active region. So, to keep the operating point in the active region, proper biasing and stabilization techniques must be used to obtain a stable output. If a large current flows through the output terminals, however, the output impedance may fall. Among all the configurations, the CE configuration is the most popular because it provides both good current gain and good voltage gain.
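A quick small-signal estimate ties these parameters together: the voltage gain of a simple CE stage is approximately -gm * Rc, with transconductance gm = Ic / VT. The bias current and collector resistor below are assumed example values, not taken from the text.

```python
# Small-signal CE voltage gain estimate: Av = -gm * Rc, gm = Ic / VT.
# Bias values are illustrative assumptions.

VT = 0.026        # thermal voltage at room temperature, volts
Ic = 1e-3         # collector bias current, 1 mA (assumed)
Rc = 4.7e3        # collector resistor, 4.7 kilo-ohms (assumed)

gm = Ic / VT      # transconductance, in siemens
Av = -gm * Rc     # negative sign reflects the 180-degree phase inversion

print(gm, Av)
```

At 1 mA of bias this gives a gain magnitude of roughly 180, and the negative sign corresponds to the 180-degree input/output phase relationship in the table above.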

Article By:
G Madhu
ECE Department
Sphoorthy Engineering College
