J. Low Power Electron. Appl., Volume 9, Issue 2 (June 2019) – 7 articles

12 pages, 1692 KiB  
Concept Paper
ILP Based Power-Aware Test Time Reduction Using On-Chip Clocking in NoC Based SoC
J. Low Power Electron. Appl. 2019, 9(2), 19; https://doi.org/10.3390/jlpea9020019 - 17 Jun 2019
Cited by 1 | Viewed by 5404
Abstract
Network-on-chip (NoC) based systems-on-chip (SoCs) have become a promising paradigm for core-based systems. Testing the individual intellectual property (IP) cores of an SoC under test-time and test-power constraints is difficult and challenging. By reusing the on-chip communication network of the NoC to test the different cores of the SoC, test time and test cost can be reduced effectively. In this paper, we propose a power-aware test schedule that reuses the existing on-chip communication network, employing on-chip test clock frequencies for power-efficient scheduling. An integer linear programming (ILP) model assigns different frequencies to the NoC cores so as to reduce test time without exceeding the power budget. Experimental results on the ITC'02 benchmark SoCs show that the proposed ILP method reduces test time by up to 50% compared to the existing method.
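The core tradeoff in the abstract above — a faster test clock shortens a core's test but raises its test power, so frequencies must be assigned under a shared power budget — can be sketched on a toy instance. This is not the paper's ILP formulation; all numbers and names are illustrative, and the tiny search space is brute-forced rather than handed to an ILP solver.

```python
from itertools import product

# Toy illustration (not the paper's model): pick one on-chip clock
# frequency per core so that total test power stays within the budget
# while the overall test time (the slowest core) is minimized.
test_cycles = [1200, 800, 1500]   # hypothetical test lengths per core (cycles)
freqs = [100e6, 200e6, 400e6]     # candidate on-chip test clock frequencies (Hz)
power_per_hz = 1e-6               # assumed: test power grows linearly with frequency
power_budget = 500.0              # watts, hypothetical

best = None
for assign in product(freqs, repeat=len(test_cycles)):
    power = sum(f * power_per_hz for f in assign)
    if power > power_budget:
        continue  # this assignment violates the power constraint
    makespan = max(c / f for c, f in zip(test_cycles, assign))
    if best is None or makespan < best[0]:
        best = (makespan, assign)

makespan, assign = best
print(f"best makespan: {makespan * 1e6:.2f} us at {[f / 1e6 for f in assign]} MHz")
```

Note how the optimum gives the highest affordable frequency to the longest test: concentrating the power budget on the bottleneck core is exactly the kind of tradeoff the ILP model resolves at scale.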
25 pages, 2938 KiB  
Article
Aggressive Exclusion of Scan Flip-Flops from Compression Architecture for Better Coverage and Reduced TDV: A Hybrid Approach
J. Low Power Electron. Appl. 2019, 9(2), 18; https://doi.org/10.3390/jlpea9020018 - 29 May 2019
Cited by 1 | Viewed by 7924
Abstract
Scan-based structural testing methods have seen numerous inventions in scan compression techniques to reduce test data volume (TDV) and test application time (TAT). Compression techniques lead to test coverage (TC) loss and test pattern count (TPC) inflation when a higher compression ratio is targeted, because of the correlations these techniques introduce. To overcome this issue, we propose a new hybrid scan compression technique, the aggressive exclusion (AE) of scan cells from compression, to increase overall TC and reduce TPC. This is achieved by excluding scan cells that contribute 12% to 43% of the overall care bits from the compression architecture and placing them in multiple scan chains with dedicated scan-data-in and scan-data-out ports. The scan cells to be excluded are selected through a detailed analysis of the last 95% of the patterns in a pattern set, so as to reduce correlations. Results show improvements in TC of up to 1.33% and reductions in TPC of up to 77.13%.
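The selection step described above — rank scan cells by their care-bit contribution and pull the top contributors out of the compression architecture — can be sketched as a greedy pass. The cell names, counts, and target fraction below are made up for illustration; the paper's actual analysis over the last 95% of the pattern set is more detailed.

```python
# Hypothetical sketch: exclude the scan cells that contribute the most
# care bits until a target fraction of all care bits is covered.
care_bits = {                  # cell name -> care-bit count (illustrative)
    "ff_a": 500, "ff_b": 420, "ff_c": 300,
    "ff_d": 120, "ff_e": 80, "ff_f": 30,
}
target_fraction = 0.40         # aim to pull ~40% of care bits out of compression

total = sum(care_bits.values())
excluded, covered = [], 0
for cell, bits in sorted(care_bits.items(), key=lambda kv: -kv[1]):
    if covered / total >= target_fraction:
        break
    excluded.append(cell)      # this cell goes into a dedicated scan chain
    covered += bits

print(excluded, f"{covered / total:.1%} of care bits excluded")
```

Because care bits are typically concentrated in a small set of cells, excluding just a few of them removes a disproportionate share of the correlation-prone specified bits.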

16 pages, 875 KiB  
Article
Implementing Adaptive Voltage Over-Scaling: Algorithmic Noise Tolerance vs. Approximate Error Detection
J. Low Power Electron. Appl. 2019, 9(2), 17; https://doi.org/10.3390/jlpea9020017 - 21 Apr 2019
Cited by 2 | Viewed by 6341
Abstract
Adaptive voltage over-scaling can be applied at run time to reach the best tradeoff between quality of results and energy consumption. This strategy encompasses the concept of timing speculation through some level of approximation; how, and on which part of the circuit, to implement such approximation is an open issue. This work presents a quantitative comparison between two complementary strategies: algorithmic noise tolerance and approximate error detection. The former implements timing speculation by means of approximate computing, while the latter exploits a more sophisticated approach based on approximating the error-detection mechanism itself. The aim of this study was to provide both a qualitative and a quantitative analysis of two real-life digital circuits mapped onto a state-of-the-art 28-nm CMOS technology.

41 pages, 844 KiB  
Article
Novel Approaches for Efficient Delay-Insensitive Communication
J. Low Power Electron. Appl. 2019, 9(2), 16; https://doi.org/10.3390/jlpea9020016 - 06 Apr 2019
Cited by 3 | Viewed by 6402
Abstract
The increasing complexity and modularity of contemporary systems, paired with increasing parameter variabilities, make the availability of flexible and robust, yet efficient, module-level interconnections instrumental. Delay-insensitive codes are very attractive in this context. There is considerable literature on this topic that classifies delay-insensitive communication channels according to the protocol (return-to-zero versus non-return-to-zero) and the code (constant-weight versus systematic), with each solution having its specific pros and cons. From a higher abstraction, however, these protocols and codes represent corner cases of a more comprehensive solution space, and an exploration of this space promises to yield interesting new approaches. This is exactly what we do in this paper. More specifically, we present a novel coding scheme that combines the benefits of constant-weight codes, namely simple completion detection, with those of systematic codes, namely zero-effort decoding. We elaborate an approach for composing efficient "Partially Systematic Constant Weight" codes for a given data word length. In addition, we explore cost-efficient and orphan-free implementations of completion detectors for both code classes, as well as suitable encoders and decoders. With respect to the protocols, we investigate the use of multiple spacers in return-to-zero protocols and show that a choice between multiple spacers can be beneficial for energy efficiency. Alternatively, the freedom to choose one of multiple spacers can be leveraged to transfer information, turning the original return-to-zero protocol into a (very basic version of a) non-return-to-zero protocol. Again, this intermediate solution can combine benefits from both extremes. For all proposed solutions we provide quantitative comparisons that cover the whole relevant design space: we derive coding efficiency, power efficiency, and area effort for pipelined and non-pipelined communication channels. This not only gives evidence of the benefits and limitations of the presented novel schemes; our hope is that this paper can serve as a reference for designers seeking an optimized delay-insensitive code/protocol/implementation for their specific application.
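The "simple completion detection" benefit of constant-weight codes mentioned above can be shown in a few lines: in an m-of-n code, a codeword has arrived exactly when m of the n wires are high, so the detector only has to count ones. The 2-of-4 parameters here are a standard small example chosen for illustration, not a code from the article.

```python
from itertools import combinations

# A 2-of-4 constant-weight code: every codeword has exactly two wires
# high, so it carries log2(C(4,2)) ~ 2.58 bits.
n, m = 4, 2
codewords = [sum(1 << i for i in word) for word in combinations(range(n), m)]

def complete(wires: int) -> bool:
    """Completion detection: the word is complete once m rails are high."""
    return bin(wires).count("1") == m

assert len(codewords) == 6           # C(4, 2) codewords
assert all(complete(w) for w in codewords)
assert not complete(0b0001)          # partial arrival -> not yet complete
print("2-of-4 codewords:", [format(w, "04b") for w in codewords])
```

A systematic code, by contrast, would let the receiver read the data bits directly (zero-effort decoding) but would need a more elaborate completion detector; the partially systematic constant-weight codes of the paper aim to sit between these two extremes.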

14 pages, 703 KiB  
Article
Voltage-Controlled Magnetic Anisotropy MeRAM Bit-Cell under Single-Event Transient Effects
J. Low Power Electron. Appl. 2019, 9(2), 15; https://doi.org/10.3390/jlpea9020015 - 05 Apr 2019
Cited by 4 | Viewed by 6691
Abstract
Magnetic tunnel junctions (MTJs) with a voltage-controlled magnetic anisotropy (VCMA) effect have been introduced to achieve robust non-volatile write control with an electric field or a switching voltage. However, continued technology scaling makes circuits more susceptible to temporary faults. The reliability of VCMA-MTJ-based magnetoelectric random access memory (MeRAM) can be impacted by environmental disturbances, because a radiation strike on the access transistor can introduce write and read failures in 1T-1MTJ MeRAM bit-cells. In this work, single-event transient (SET) effects on a VCMA-MTJ-based MeRAM in 28 nm FDSOI CMOS technology are investigated. Results show the minimum SET charge Qc, as a function of the striking time, required at the access transistor to cause an unsuccessful switch, that is, an error in the writing process (write failure). The synchronism between the fluctuations of the magnetic field in the MTJ free layer and the timing of the write pulse is also analyzed in terms of SET robustness. Moreover, results show that the minimum Qc value can vary by more than 100% depending on the magnetic state of the MTJ and the width of the access transistor. In addition, the most critical time for a SET occurrence may be before or after the write pulse, depending on the magnetic state of the MTJ.

10 pages, 4992 KiB  
Article
PEDOT:PSS Thermoelectric Generators Printed on Paper Substrates
J. Low Power Electron. Appl. 2019, 9(2), 14; https://doi.org/10.3390/jlpea9020014 - 30 Mar 2019
Cited by 16 | Viewed by 7266
Abstract
Flexible electronics is a field gathering growing interest among researchers and companies, with widely varying applications such as organic light-emitting diodes, transistors, and many different sensors. If a circuit is to be portable or off-grid, the power sources available are batteries, supercapacitors, or some type of power generator. Thermoelectric generators produce electrical energy through the diffusion of charge carriers in response to the heat flux caused by a temperature gradient between junctions of dissimilar materials. As wearables, flexible electronics, and intelligent-packaging applications increase, there is a need for low-cost, recyclable, and printable power sources. For such applications, printed thermoelectric generators (TEGs) are an interesting power source, which can also be combined with printable energy storage such as supercapacitors. Poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate), or PEDOT:PSS, is a conductive polymer that has gathered interest as a thermoelectric material. Plastic substrates are commonly used for printed electronics, but an interesting and emerging alternative is paper. In this article, a thermoelectric generator consisting of PEDOT:PSS and silver inks was printed on two common types of paper substrate; such a generator could be used to power electronic circuits on paper.
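The thermoelectric mechanism described above can be quantified with the Seebeck relation V_oc = n · (S_p − S_n) · ΔT for n thermocouple legs in series. The numbers below are back-of-the-envelope assumptions chosen for illustration (typical orders of magnitude for PEDOT:PSS legs with silver interconnects), not measurements from the article.

```python
# Back-of-the-envelope open-circuit voltage of a printed TEG.
# All values are assumed for illustration, not taken from the article.
S_pedot = 18e-6    # V/K, Seebeck coefficient, typical order for PEDOT:PSS
S_silver = 1.5e-6  # V/K, silver contributes little to the voltage
n_legs = 50        # thermocouple legs printed in series (assumed)
dT = 10.0          # temperature difference across the generator, K

v_oc = n_legs * (S_pedot - S_silver) * dT
print(f"open-circuit voltage: {v_oc * 1e3:.2f} mV")
```

With these assumed values the output is only a few millivolts, which is why printed TEGs are usually paired with many series legs, a DC-DC boost stage, or printable energy storage such as supercapacitors.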
(This article belongs to the Special Issue Flexible Electronics and Self-Powered Systems)

9 pages, 292 KiB  
Article
Improving the Performance of Turbo-Coded Systems under Suzuki Fading Channels
J. Low Power Electron. Appl. 2019, 9(2), 13; https://doi.org/10.3390/jlpea9020013 - 29 Mar 2019
Cited by 1 | Viewed by 6057
Abstract
In this paper, the performance of coded systems is considered in the presence of Suzuki fading channels, which combine both short-term and long-term fading. The difficulty in manipulating a Suzuki fading model is the complicated integration involved in evaluating the Suzuki probability density function (PDF). We calculated the noise PDF after the zero-forcing equalizer (ZFE) at the receiver end using several approaches, and used the derived PDF to calculate log-likelihood ratios (LLRs) for turbo-coded systems; the results were compared against Gaussian distribution-based LLRs. The results showed a 2 dB performance improvement over traditional LLRs at a bit error rate (BER) of 10⁻⁶, with no added complexity. Simulations were carried out in MATLAB and showed a good improvement in the performance of the turbo-coded system with the proposed LLRs compared to Gaussian-based LLRs.
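The Gaussian-based LLR that the abstract uses as its baseline has a well-known closed form: for BPSK over an AWGN channel with noise variance σ², the log-likelihood ratio of a received sample y reduces to 2y/σ². The sketch below verifies this identity numerically; the paper's contribution is to replace this Gaussian assumption with LLRs derived from the noise PDF after zero-forcing equalization of the Suzuki-faded signal.

```python
import math

def gaussian_llr(y: float, sigma: float) -> float:
    """log p(y|x=+1) / p(y|x=-1) for y = x + n, n ~ N(0, sigma^2)."""
    return 2.0 * y / sigma**2

def gauss_pdf(y: float, mu: float, sigma: float) -> float:
    """Gaussian likelihood of observing y given transmitted level mu."""
    return math.exp(-(y - mu) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Sanity check: the closed form matches the explicit likelihood ratio.
y, sigma = 0.7, 0.5
exact = math.log(gauss_pdf(y, 1.0, sigma) / gauss_pdf(y, -1.0, sigma))
print(gaussian_llr(y, sigma), exact)
```

When the post-equalizer noise is not Gaussian, as under Suzuki fading, this closed form mismatches the true likelihoods, which is what leaves room for the paper's reported 2 dB gain from PDF-matched LLRs.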
(This article belongs to the Special Issue Emerging Interconnection Networks Across Scales)
