
Science news


Publication date: September 2018
Source: Computer Languages, Systems & Structures, Volume 53

Author(s): Pedro Pinto, Tiago Carvalho, João Bispo, Miguel António Ramalho, João M.P. Cardoso

Usually, Aspect-Oriented Programming (AOP) languages are extensions of a specific target programming language (e.g., AspectJ for Java and AspectC++ for C++). Although providing AOP support through target-language extensions may ease the adoption of an approach, it may impose constraints related to constructs and semantics. Furthermore, by tightly coupling the AOP language to the target language, the reuse potential of many aspects, especially those regarding non-functional requirements, is lost. LARA is a domain-specific language inspired by AOP concepts, with the specification of source-to-source transformations as one of its main goals. LARA has been designed to be, as much as possible, independent of the target language and to provide constructs and semantics that ease the definition of concerns, especially those related to non-functional requirements. In this paper, we propose techniques to overcome some of the challenges presented by a multi-language approach to AOP of cross-cutting concerns focused on non-functional requirements and applied through a weaving process. The techniques mainly focus on providing well-defined library interfaces that can have a concrete implementation for each supported target language: the developer programs against an agnostic interface, and the weaver provides the implementation specific to the target language. We evaluate our approach using 8 concerns with varying levels of language agnosticism across 4 target languages (C, C++, Java and MATLAB), and show that the proposed techniques lead to more concise LARA aspects, high reuse of aspects, and significant effort reductions when developing weavers for new imperative, object-oriented programming languages.
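The core idea of the abstract, an agnostic interface with one concrete implementation per target language, selected by the weaver, can be sketched as follows. This is an illustrative Python sketch, not LARA syntax; the "Logger" interface and the generated statements are hypothetical examples.

```python
# Illustrative sketch of a language-agnostic aspect interface with
# per-target-language implementations chosen by the weaver.
# Not LARA syntax; all names here are hypothetical.

class LoggerInterface:
    """Agnostic interface the aspect developer programs against."""
    def log_call(self, function_name: str) -> str:
        raise NotImplementedError

class JavaLogger(LoggerInterface):
    """Concrete implementation for the Java target language."""
    def log_call(self, function_name: str) -> str:
        return f'System.out.println("entering {function_name}");'

class CLogger(LoggerInterface):
    """Concrete implementation for the C target language."""
    def log_call(self, function_name: str) -> str:
        return f'printf("entering {function_name}\\n");'

# The weaver selects the implementation for the current target
# language, so the aspect itself stays language-agnostic.
IMPLEMENTATIONS = {"java": JavaLogger(), "c": CLogger()}

def weave_logging(target_language: str, function_name: str) -> str:
    """Return the code the weaver would inject at a call join point."""
    return IMPLEMENTATIONS[target_language].log_call(function_name)

print(weave_logging("java", "main"))
print(weave_logging("c", "main"))
```

Adding a new target language then amounts to supplying one more concrete implementation of the shared interface, which is the source of the effort reduction the abstract reports.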





Publication date: September 2018
Source: Computer Languages, Systems & Structures, Volume 53

Author(s): Mengmeng Zhu, Hoang Pham

Most existing software reliability growth models (SRGMs) assume that software faults are mutually independent and that detected faults can be removed perfectly. However, neither assumption is realistic in practice, since dependent faults can also exist in a program, and it is unlikely that all detected faults can be corrected in the testing phase, given limited testing resources, the skill and experience of the programmers, and multi-release considerations within a software organization. This paper presents a new non-homogeneous Poisson process (NHPP) software reliability model that considers both software fault dependency and imperfect fault removal. To explain software fault dependency clearly, some facts and examples are discussed in Section 1. Two types of software faults are defined according to fault dependency: Type I (independent) faults and Type II (dependent) faults. Two debugging phases, Phase I and Phase II, are proposed according to the type of fault being debugged. The proposed model also accounts, in both phases, for the small portion of software faults that testers are unable to remove. The model's effectiveness is illustrated on three datasets collected from industry. Some limitations of the proposed model are discussed in the last section.
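For background on the NHPP framework the paper extends, here is a minimal sketch of the classic Goel-Okumoto NHPP model (not the paper's two-phase, fault-dependency model): its mean value function m(t) = a(1 - exp(-bt)) gives the expected cumulative number of faults detected by time t, and reliability over an interval follows from the Poisson process.

```python
import math

def goel_okumoto_mean(t: float, a: float, b: float) -> float:
    """Expected cumulative faults detected by time t under the
    classic Goel-Okumoto NHPP model: m(t) = a * (1 - exp(-b*t)).
    a = total expected faults, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def interval_reliability(t: float, dt: float, a: float, b: float) -> float:
    """P(no failure in (t, t+dt]) = exp(-(m(t+dt) - m(t))),
    a standard consequence of the NHPP assumption."""
    return math.exp(-(goel_okumoto_mean(t + dt, a, b)
                      - goel_okumoto_mean(t, a, b)))
```

Models such as the one proposed in the paper replace this single mean value function with phase- and fault-type-specific intensity functions, but are fitted and evaluated in the same way.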






Publication date: April 2018
Source: Computer Fraud & Security, Volume 2018, Issue 4



Breaches of computer systems belonging to several high-profile brands have resulted in leaks of millions of customer records. In one case, the leak involves payment card details, and some of the incidents date back several months.





Publication date: April 2018
Source: Computer Fraud & Security, Volume 2018, Issue 4

Author(s): Steve Mansfield-Devine

The plot thickens in the spat between Apple and the FBI. It now seems that elements within the FBI withheld information about the agency’s ability to crack iPhones because it would have been useful to get a court judgment against Apple and set a legal precedent.





Publication date: April 2018
Source: Computer Fraud & Security, Volume 2018, Issue 4



As Facebook continues to reel from revelations about the way it has allowed third parties to exploit its users’ data, gay dating app Grindr, which has 3.6 million daily active users, has joined the ranks of firms accused of over-sharing.






Publication date: August 2018
Source: Computer Communications, Volume 126

Author(s): Kaikai Chi, Yi-hua Zhu, Yanjun Li

In wireless powered communication networks (WPCNs), where nodes are powered by energy harvested from radio-frequency (RF) transmissions, efficiently scheduling the downlink wireless energy transfer (WET) time and the uplink wireless information transmission (WIT) time is critical to achieving good throughput performance. In this paper, the following type of star-topology WPCN is considered: each node has its own desired throughput, but the throughput demands of all nodes cannot be satisfied because of the nodes' very low energy-harvesting rates. For such WPCNs, it is meaningful to minimize the sum throughput-gap. Unfortunately, this requirement is not met by existing data-collection schemes such as the sum-throughput maximization (STM) scheme. We study weighted sum throughput-gap minimization (W-STGM) by jointly optimizing the time allocations for the WET and the WITs. Specifically, we first formulate the W-STGM problem as a non-linear optimization problem and prove that it covers the previously studied STM problem as a special case in which all nodes have the same throughput weight and excessively high throughput demands. Second, after proving that the problem is non-convex, we decompose it into two sub-problems: a master problem, which determines the optimal WET time, and a slave problem, which determines the optimal WIT time allocations for a given WET time. Because the slave problem is convex, we develop a dual decomposition method to solve it, and we design a golden-section search algorithm to solve the master problem. Simulation results show that, compared with STM, W-STGM satisfies the nodes' throughput demands more adequately: it avoids node throughput over-provisioning, which wastes system resources, and increases the throughput of nodes with large throughput weights by up to several tens of percentage points.
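The master problem is solved by golden-section search, a standard derivative-free method for minimizing a unimodal function of one variable (here, the WET time). Below is a generic sketch of the algorithm with a hypothetical objective; the paper's actual objective comes from solving the slave problem at each candidate WET time.

```python
import math

INV_PHI = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618

def golden_section_minimize(f, lo, hi, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi] by golden-section
    search. Each iteration shrinks the bracket by a factor of 1/phi,
    reusing one interior evaluation."""
    a, b = lo, hi
    c = b - INV_PHI * (b - a)   # left interior point
    d = a + INV_PHI * (b - a)   # right interior point
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:             # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - INV_PHI * (b - a)
            fc = f(c)
        else:                   # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INV_PHI * (b - a)
            fd = f(d)
    return (a + b) / 2.0

# Hypothetical objective: sum throughput-gap as a function of the
# WET time fraction; in the paper, evaluating it means solving the
# (convex) slave problem for that WET time.
best_wet = golden_section_minimize(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
```

Golden-section search needs only function evaluations, which fits the decomposition here: each evaluation of the master objective is one slave-problem solve.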





Publication date: July 2018
Source: Computer Communications, Volume 125

Author(s): P.Y. Dibal, E.N. Onwuka, J. Agajo, C.O. Alenoghena

As wireless devices and applications proliferate, spectrum utilization by licensed users is expected to move from a low to a medium occupancy state. Cognitive radios (CRs) will then need to sense wider bands to find free channels, so enhanced wideband sensing algorithms are needed, and enhancing established tools for new applications can be quite useful. The discrete wavelet packet transform (DWPT) is a good mathematical tool that can be enhanced for better application in wireless communications. This paper presents an algorithm to identify spectrum holes in a cognitive radio system, based on the DWPT enhanced with the Hilbert transform for spectrum sensing. The Hilbert-transform enhancement sharpens the PSD edges for better detection. Using histogram analysis of a discrete wavelet packet decomposed signal, the algorithm, which we call DWPT-HiSHIA (Discrete Wavelet Packet Transform – Histogram Spectrum Hole Identification Algorithm), determines whether a sub-band channel contains a spectrum hole. Simulation results show the effectiveness of the algorithm in identifying spectrum holes in the sub-band channels of a discrete wavelet packet decomposed signal.
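To illustrate the underlying decision, here is a deliberately simplified energy detector: split the PSD into equal sub-bands and flag as holes those whose mean power falls below a threshold. This is a stand-in for, not an implementation of, the paper's DWPT/Hilbert/histogram pipeline; the sub-band split via FFT and the fixed threshold ratio are assumptions of this sketch.

```python
import numpy as np

def find_spectrum_holes(signal, n_subbands, threshold_ratio=0.1):
    """Simplified energy detector: split the PSD into n_subbands equal
    sub-bands and flag those whose mean power is below
    threshold_ratio * the overall mean sub-band power.
    (The paper instead uses a DWPT decomposition, Hilbert-transform
    edge sharpening, and histogram analysis per sub-band.)"""
    psd = np.abs(np.fft.rfft(signal)) ** 2          # crude PSD estimate
    bands = np.array_split(psd, n_subbands)         # equal sub-bands
    band_power = np.array([b.mean() for b in bands])
    return band_power < threshold_ratio * band_power.mean()

# Example: a single tone at digital frequency 0.25 cycles/sample
# occupies one of four sub-bands; the other three are holes.
n = np.arange(1024)
occupied_signal = np.cos(2 * np.pi * 0.25 * n)
holes = find_spectrum_holes(occupied_signal, n_subbands=4)
```

The DWPT-based approach in the paper serves the same end (a per-sub-band occupied/free decision) but obtains the sub-band decomposition from wavelet packets rather than a flat FFT split.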





Publication date: July 2018
Source: Computer Communications, Volume 125

Author(s): Alberto Blanco-Justicia, Josep Domingo-Ferrer

The number of online service accounts per person has increased rapidly in recent years. People now have tens to hundreds of online accounts on average, and it is clear that users do not choose a new, different and strong password for each of these accounts. On the other hand, it is quite inconvenient for the user to be forced to authenticate explicitly each time she wants to use one of her many accounts; this is especially true on small user devices such as smartphones. Implicit authentication mitigates these problems by authenticating individuals based not only on their identity and credentials, but on how they interact with a device, i.e. their behavior. User behavior can be characterized by collecting keystroke patterns, browser history and configuration, and IP addresses and location, among other characteristics. However, keeping the user's behavior profile on authentication servers can be viewed as privacy-invasive. Privacy-preserving implicit authentication has recently been introduced to protect users' profiles, specifically against the party performing the authentication, which we call the server in the sequel. Yet, the privacy-preserving implicit authentication schemes proposed so far involve substantial computation by both the user and the server. We propose here a practical mechanism based on comparing behavior feature sets encoded as Bloom filters. The new mechanism entails much less computation and can accommodate much more comprehensive feature sets than previous alternatives.
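The idea of comparing feature sets via Bloom filters can be sketched as follows. This is a minimal illustration, not the paper's construction: the filter size, hash scheme, and Jaccard-style similarity score are assumptions of this sketch, and a real privacy-preserving scheme would add cryptographic protection on top.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over string features (illustrative sketch,
    not the paper's exact construction)."""

    def __init__(self, n_bits=256, n_hashes=3):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = 0  # bit set stored as a Python int

    def _positions(self, item):
        # Derive n_hashes bit positions by salting one hash function.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def similarity(self, other):
        """Jaccard-style score over set bits; two filters can be
        compared without revealing the raw features they encode."""
        inter = bin(self.bits & other.bits).count("1")
        union = bin(self.bits | other.bits).count("1")
        return inter / union if union else 1.0

# Usage: compare an enrolled behavior profile against a fresh sample;
# a high score suggests the same user (feature strings are made up).
enrolled, fresh = BloomFilter(), BloomFilter()
for feature in ["ip:1.2.3.4", "ua:firefox", "tz:UTC+2"]:
    enrolled.add(feature)
    fresh.add(feature)
score = enrolled.similarity(fresh)
```

Because comparison reduces to bitwise AND/OR and popcounts over fixed-size filters, it stays cheap even for comprehensive feature sets, which is the efficiency argument the abstract makes.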