This paper presents an eHealth use case based on a privacy-preserving machine learning platform to detect arrhythmia, developed by the PAPAYA project, that can run in an untrusted domain. It discusses legal privacy and user requirements that we elicited for this use case from the GDPR and via stakeholder interviews. These include requirements for secure pseudonymisation schemes, for allowing also pseudonymous users to exercise their data subject rights, for not making diagnostic decisions fully automatically, and for assurance guarantees, conformance with specified standards, and informing clinicians and patients about the privacy protection. The requirements are not only relevant for our use case but also for other use cases utilising privacy-preserving data analytics to classify medical data.
Open experimentation with operational Mobile Broadband (MBB) networks in the wild is currently a fundamental requirement of the research community in its endeavor to address the need for innovative solutions for mobile communications. Even more, there is a strong need for objective data about the stability and performance of MBB (e.g., 3G/4G) networks, and for tools that rigorously and scientifically assess their status. In this paper, we introduce the MONROE measurement platform: an open-access and flexible hardware-based platform for measurements and custom experimentation on operational MBB networks. The MONROE platform enables accurate, realistic and meaningful assessment of the performance and reliability of 11 MBB networks in Europe. We report on our experience designing, implementing and testing the solution we propose for the platform. We detail the challenges we overcame while building and testing the MONROE testbed and motivate our design and implementation choices accordingly. We describe and exemplify the capabilities of the platform and the wide variety of experiments that external users already perform using the system.
Mifare Classic is a very popular near-field communication technology that provides shared-key, access-controlled storage. Although the authentication protocol of Mifare Classic has been compromised for half a decade, systems based on this technology are still being deployed, e.g. for access control and for public transport ticketing. Using commodity hardware, such as NFC-enabled smartphones, bypassing the security measures in some cases requires only the installation and operation of a smartphone app. To this end, we present case studies of a number of Mifare Classic systems deployed during the last year, to serve as an illustration of practical security problems and to raise awareness thereof among NFC technology buyers and system implementors.
This thesis presents research on transport layer behavior in wireless networks. As the Internet is expanding its reach to include mobile devices, it has become apparent that some of the original design assumptions for the dominant transport protocol, TCP, are approaching their limits. A key feature of TCP is its congestion control algorithm, constructed on the assumption that packet loss is normally very low, and that packet loss therefore is a sign of network congestion. This holds true for wired networks, but in mobile wireless networks non-congestion-related packet loss may appear. The varying signal power inherent in mobility and handover between base stations are two example causes of such packet loss. This thesis provides an overview of the challenges for TCP in wireless networks together with a compilation of a number of suggested TCP optimizations for these environments. A TCP modification called TCP-L is proposed. It allows an application to increase its performance, in environments where residual bit errors normally degrade throughput, by making a reliability tradeoff. The performance of TCP-L is experimentally evaluated with an implementation in the Linux kernel. The transport layer performance in a 4G scenario is also experimentally investigated, focusing on the impact of the link layer design and its parameterization. Further, for emulation-based protocol evaluations, controlled packet loss and bit error generation is shown to be an important aspect.
This paper presents a technique to improve the performance of TCP and the utilization of wireless networks. Wireless links exhibit high rates of bit errors compared to communication over wireline or fiber. Since TCP cannot separate packet losses due to bit errors from losses due to congestion, all losses are treated as signs of congestion and congestion avoidance is initiated. This paper explores the possibility of accepting TCP packets with an erroneous checksum, to improve network performance for those applications that can tolerate bit errors. Since errors may be in the TCP header as well as the payload, the possibility of recovering the header is discussed, and an algorithm for this recovery is presented. Experiments with an implementation have been performed, which show that large improvements in throughput can be achieved, depending on link and error characteristics.
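As a rough illustration of the kind of acceptance logic discussed above, the Python sketch below delivers a segment despite a failed checksum when the destination application tolerates bit errors and the header still matches a known connection; the names and the simple plausibility check are assumptions standing in for the paper's actual header-recovery algorithm.

```python
# Hypothetical sketch (not the paper's implementation) of checksum-tolerant
# acceptance: deliver a segment despite a failed checksum if the destination
# application tolerates bit errors and the header still matches a known
# connection. Names and the plausibility check are illustrative assumptions.

def internet_checksum(data: bytes) -> int:
    """Standard 16-bit one's-complement Internet checksum."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def accept_segment(segment: bytes, header: dict, connections: set,
                   error_tolerant_ports: set) -> bool:
    """Return True if the segment should be delivered to the application."""
    if internet_checksum(segment) == 0:
        return True                      # checksum verifies: normal delivery
    if header["dst_port"] not in error_tolerant_ports:
        return False                     # application cannot tolerate errors
    # Simple plausibility check standing in for the header-recovery algorithm:
    # the 4-tuple must match an established connection.
    key = (header["src_ip"], header["src_port"],
           header["dst_ip"], header["dst_port"])
    return key in connections
```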
This paper presents a wireless link and network emulator, based upon the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals), link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link-level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. A purpose of the emulator is to investigate the resulting interaction with transport layer protocols. The emulator is built on Internet technologies, and is installed as a gateway between communicating hosts. The paper gives an overview of the emulator design, and presents preliminary experiments with three different TCP variants. The results illustrate the functionality of the emulator by showing the effect of changing link layer parameters on the different TCP variants.
This paper presents results from an experimental study of TCP in a wireless 4G evaluation system. Test-bed results on transport layer performance are presented and analyzed in relation to several link layer aspects. The aspects investigated are the impact of channel prediction errors, channel scheduling, delay, and the adaptive modulation switch level on TCP performance. The paper contributes a cross-layer analysis of how symbol modulation levels, different scheduling strategies, channel prediction errors and the resulting frame retransmissions affect TCP. The paper also shows that highly persistent ARQ with fast link retransmissions does not interact negatively with the TCP retransmission timer, even for short round-trip delays.
This paper presents a wireless link and network emulator, along with experiments and validation against the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals), link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. The emulator has been used to experimentally investigate the resulting interaction between the transport layer and the link layer. The paper gives an overview of the emulator design, and presents experimental results with three different TCP variants in combination with various link layer characteristics.
The performance of applications in wireless networks is partly dependent upon the link configuration. Link characteristics vary with frame retransmission persistency, link frame retransmission delay, adaptive modulation strategies, coding, and more. The link configuration and channel conditions can lead to packet loss, delay and delay variations, which impact different applications in different ways. A bulk transfer application may tolerate delays to a large extent, while packet loss is undesirable. On the other hand, real-time interactive applications are sensitive to delay and delay variations, but may tolerate packet loss to a certain extent. This paper contributes a study of the effect of link frame retransmission persistency and delay on packet loss and latency for real-time interactive applications. The results indicate that a reliable retransmission mechanism with fast link retransmissions in the range of 2-8 ms is sufficient to provide an upper delay bound of 50 ms over the wireless link, which is well within the delay budget of voice-over-IP applications.
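For orientation, the following back-of-the-envelope Python sketch shows how many link-level retransmissions at a given retransmission delay fit within the 50 ms bound mentioned above; the 2 ms first-transmission time is an illustrative assumption, not a value from the paper.

```python
# Back-of-the-envelope check of the delay bound discussed above, using an
# assumed 2 ms first-transmission time per frame (an illustrative value,
# not taken from the paper) and link retransmission delays of 2-8 ms.
FRAME_TX_MS = 2           # assumed time for the first transmission attempt
LINK_DELAY_BOUND_MS = 50  # targeted upper bound over the wireless link

for retx_delay_ms in (2, 4, 8):
    # retransmissions that still fit within the bound after the first attempt
    max_retx = (LINK_DELAY_BOUND_MS - FRAME_TX_MS) // retx_delay_ms
    print(f"retransmission delay {retx_delay_ms} ms: "
          f"up to {max_retx} retransmissions within {LINK_DELAY_BOUND_MS} ms")
```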
This paper presents a wireless link and network emulator for the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals) link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. The emulator has been used to experimentally investigate the resulting interaction between the transport layer and the physical/link layer in such a downlink. The paper introduces the Wireless IP system, describes the emulator design and implementation, and presents experimental results with TCP in combination with various physical/link layer parameters. The impact of link layer ARQ persistency, adaptive modulation, prediction errors and simple scheduling are all considered.
Existing censorship measurement platforms frequently suffer from poor adoption, insufficient geographic coverage, and scalability problems. In order to outline an analytical framework and data collection needs for future ubiquitous measurement initiatives, we build on top of the existing and widely-deployed RIPE Atlas platform. In particular, we propose methods for monitoring the reachability of vital services through an algorithm that balances timeliness, diversity, and cost. We then use Atlas to investigate blocking events in Turkey and Russia. Our measurements identify under-examined forms of interference and provide evidence of cooperation between a well-known blogging platform and government authorities for purposes of blocking hosted content.
Organisations and leaders often lack an understanding of the core of change work: an understanding of how people function. Change often gives rise to strong emotions, and some are affected harder than others; employees can easily turn into opponents if management does not approach the staff in the right way.
The purpose of this thesis is to provide a knowledge base on how a project group should work with users in order to motivate them towards a positive attitude to the change. It provides an understanding of people's needs and driving forces, and of the various reasons why resistance to change can arise among those affected.
Data was collected through a literature study in the area, followed by empirical data gathered through semi-structured interviews. All respondents come from a gliding club, and the aim was to find out how the club has handled different change initiatives: one already completed change, the transition to a system for online booking of aircraft, and one change not yet realised, a digital solution for their time reporting.
There are several different approaches a leader can apply to motivate employees during change work. These approaches can each be effective in their own way, largely depending on the context and the specific case. To make motivation effective, the right approach must be chosen at the right time, which presupposes that leaders understand the individual and his or her reactions to the change.
The NEAT system was developed in 2017 to increase flexibility in the choice of network transport protocol being used. One of the most important components of the NEAT system is the Policy Manager (PM), which determines what protocol is to be utilized by the application. The PM is written in Python while the rest of the NEAT system is C-based, so a natural evolution of the PM is to perform a functional translation of it to C. While the main goal was solely to develop a fully functional C-based PM, the difference in programming languages in the end also brought a 28-fold performance increase compared with the Python-based PM. There are still a few improvements left to do in the PM, but it is already a notable improvement for the NEAT system as a whole.
Experio Lab at Landstinget i Värmland, Karolinska Institutet and Kungliga Tekniska Högskolan are currently developing a digital service in the form of an app for pain patients. The goal is to use a digital service to learn more about how patients can map their symptoms before a meeting with the healthcare system.
Ahead of upcoming usability tests with real patients, this thesis aims to examine how well the prototype works in terms of its interface, and to give suggestions that could improve the app before those tests.
The study was conducted in two steps: first the app was tested as a paper prototype, and then usability tests were carried out on an iPad with students and experts in service design, combining interviews and observations. The results showed that several of the test participants had the same kinds of difficulties interacting with the app. The hardest parts were drawing their symptoms on the symptom drawing and knowing where in the process they were. After the study, most test participants said that they would recommend the app to someone they knew, and a majority also saw a need for the app's user interface to become clearer to the user.
People engage with multiple online services and carry out a range of different digital transactions with these services. Registering an account, sharing content in social networks, or requesting products or services online are a few examples of such digital transactions. With every transaction, people take decisions and make disclosures of personal data. Despite the possible benefits of collecting data about a person or a group of people, massive collection and aggregation of personal data carries a series of privacy and security implications which can ultimately result in a threat to people's dignity, their finances, and many other aspects of their lives. For this reason, privacy and transparency enhancing technologies are being developed to help people protect their privacy and personal data online. However, some of these technologies are usually hard to understand, difficult to use, and get in the way of people's momentary goals.
The objective of this thesis is to explore, and iteratively improve, the usability and user experience provided by novel privacy and transparency technologies. To this end, it compiles a series of case studies that address identified issues of usable privacy and transparency at four stages of a digital transaction, namely the information, agreement, fulfilment and after-sales stages. These studies contribute a better understanding of the human factors and design requirements that are necessary for creating user-friendly tools that can help people protect their privacy and control their personal information on the Internet.
We explore how concepts from the field of network science can be employed to inform Internet users about the way their personally identifiable information (PII) is being used and shared by online services. We argue that presenting users with graphical interfaces that display the network structures formed by PII exchanges can have an impact on the decisions users take online, such as the services they choose to interact with and the information they decide to release.
The PrimeLife Policy Language (PPL) has the objective of helping end users make the data handling practices of data controllers more transparent, allowing them to make well-informed decisions about the release of personal data in exchange for services. In this chapter, we present our work on user interfaces for the PPL policy engine, which aims at displaying the core elements of a data controller's privacy policy in an easily understandable way as well as displaying how far it corresponds with the user's privacy preferences. We also show how privacy preference management can be simplified for end users.
Whereas in real everyday life individuals have an intuitive approach to deciding which information to disseminate to others, in the digital world it becomes difficult to keep control over the information that is distributed to different online services. In this paper we present the design of a user interface for a system that can help users decide which pieces of information to distribute to which type of service provider, by allowing them to segregate their information attributes into various personalized profiles. Iterative usability evaluations showed that users understand and appreciate the possibility to segregate information, and revealed possible improvements, implications and limitations of such an interface.
Mobile Edge Clouds (MECs) address the critical needs of bandwidth-intensive, latency-sensitive mobile applications by positioning computing and storage resources at the network’s edge in Edge Data Centers (EDCs). However, the diverse, dynamic nature of EDCs’ resource capacities and user mobility poses significant challenges for resource allocation and management. Efficient EDC operation requires accurate forecasting of computational load to ensure optimal scaling, service placement, and migration within the MEC infrastructure. This task is complicated by the temporal and spatial fluctuations of computational load. We develop a novel MEC computational demand forecasting method using Federated Learning (FL). Our approach leverages FL’s distributed processing to enhance data security and prediction accuracy within the MEC infrastructure. By incorporating uncertainty bounds, we improve load scheduling robustness. Evaluations on a Tokyo dataset show significant improvements in forecast accuracy compared to traditional methods, with a 42.04% reduction in Mean Absolute Error (MAE) using LightGBM and a 34.93% improvement with CatBoost, while maintaining minimal networking overhead for model transmission.
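As a minimal sketch of the federated learning idea behind this approach, the Python snippet below averages locally trained linear load predictors across edge sites; the function names, linear model and weighting are illustrative assumptions, not the paper's LightGBM/CatBoost method or its uncertainty bounds.

```python
# Minimal FedAvg-style sketch: each edge site trains a local linear load
# predictor and a coordinator averages the parameters, weighted by data size.
# The model, update rule and names are illustrative assumptions; the paper's
# method (LightGBM/CatBoost with uncertainty bounds) is more involved.
import numpy as np

def local_update(w, X, y, lr=0.01, epochs=50):
    """One edge site's local gradient-descent training on its own load data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_average(client_data, n_features, rounds=10):
    """client_data: list of (X, y) pairs, one per edge data center."""
    w_global = np.zeros(n_features)
    for _ in range(rounds):
        local_ws = [local_update(w_global.copy(), X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        w_global = np.average(local_ws, axis=0, weights=sizes)  # only models move
    return w_global
```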
Data quality assessment has become a prominent component in the successful execution of complex data-driven artificial intelligence (AI) software systems. In practice, real-world applications generate huge volumes of data at high velocity. These data streams require analysis and preprocessing before being permanently stored or used in a learning task. Therefore, significant attention has been paid to the systematic management and construction of high-quality datasets. Nevertheless, managing voluminous and high-velocity data streams is usually performed manually (i.e. offline), making it an impractical strategy in production environments. To address this challenge, DataOps has emerged to achieve life-cycle automation of data processes using DevOps principles. However, determining data quality on a fitness scale constitutes a complex task within the framework of DataOps. This paper presents a novel Data Quality Scoring Operations (DQSOps) framework that yields a quality score for production data in DataOps workflows. The framework incorporates two scoring approaches: an ML prediction-based approach that predicts the data quality score, and a standard-based approach that periodically produces ground-truth scores based on assessing several data quality dimensions. We deploy the DQSOps framework in a real-world industrial use case. The results show that DQSOps achieves significant computational speedups compared to the conventional approach of data quality scoring, while maintaining high prediction performance.
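The sketch below illustrates the two-track structure described above, assuming hypothetical quality dimensions (completeness, validity, uniqueness) and a simple linear model as the predictor; it is not the framework's actual implementation.

```python
# Sketch of the two-track scoring idea: a standard-based scorer periodically
# produces ground-truth quality scores, and a lightweight model trained on
# those scores predicts the score for the remaining batches. Dimension names
# and the model choice are illustrative assumptions.
from sklearn.linear_model import LinearRegression

DIMENSIONS = ("completeness", "validity", "uniqueness")

def standard_score(profile):
    """Stand-in for the expensive standard-based assessment: mean of ratios."""
    return sum(profile[d] for d in DIMENSIONS) / len(DIMENSIONS)

class DataQualityScorer:
    def __init__(self, ground_truth_every=100):
        self.model = LinearRegression()
        self.every = ground_truth_every
        self.X, self.y, self.seen = [], [], 0

    def score(self, profile):
        features = [profile[d] for d in DIMENSIONS]
        self.seen += 1
        if self.seen % self.every == 1 or len(self.y) < 2:
            s = standard_score(profile)          # periodic ground truth
            self.X.append(features)
            self.y.append(s)
            self.model.fit(self.X, self.y)       # retrain the fast predictor
            return s
        return float(self.model.predict([features])[0])  # fast ML-based score
```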
The Helios voting scheme is well studied, including formal proofs of verifiability and ballot privacy. However, depending on its version, the scheme provides either participation privacy (hiding who participated in the election) or verifiability against a malicious bulletin board (preventing election manipulation by ballot stuffing), but not both at the same time. It also does not provide receipt-freeness, thus enabling vote buying by letting voters construct receipts proving how they voted. Recently, an extension to Helios, further referred to as KTV-Helios, has been proposed that claims to provide these additional security properties. However, the authors of KTV-Helios did not prove their claims. Our contribution is to provide formal definitions for participation privacy and receipt-freeness, which we apply to KTV-Helios. Furthermore, we apply the existing definition of ballot privacy, which was also used for evaluating the security of Helios, in order to show that ballot privacy also holds for KTV-Helios.
Informational privacy of individuals has significantly gained importance as information technology has become widely deployed. Data, once digitalised, can be copied, distributed, and stored long-term at negligible cost. This has dramatic consequences for individuals, who leave traces in the form of personal data whenever they interact with information technology, for instance computers and phones, or even when information technology records the personal data of aware or unaware individuals. The right of individuals to informational privacy, in particular to control the flow and use of their personal data, is easily undermined by those controlling the information technology.
The objective of this thesis is to study the measurement of informational privacy, with a particular focus on scenarios where an individual discloses personal data to a second party which uses this data for re-identifying the individual within a set of other individuals. We contribute privacy metrics for several instances of this scenario in the publications included in this thesis, most notably one which adds a time dimension to the scenario for modelling the effects of the time passed between data disclosure and usage. The result is a new framework for inter-temporal privacy metrics.
This paper reports our initial findings regarding the state of testing of software in the Swedish public health information system. At present, the system is only available through a black-box interface, i.e. through the GUI. This and other issues related to politics, management and organization indicate that much work is needed in order for the software to have the quality level expected by a safety-critical system. The proposed solution by the public health organization for raising the quality is to use an independent test database. Based on our initial understanding of the problem, we argue that there might be other solutions that would perhaps be more cost-effective and have a stronger impact on the quality of the system. Our main contribution lies in the data analysis, where we have collected the problems and suggested alternative cost-saving solutions.
In empirical software engineering research there is an increased use of questionnaires and surveys to collect information from practitioners. Typically, such data is then analyzed based on overall, descriptive statistics. Even though this can capture the general trends, there is a risk that the opinions of different (minority) sub-groups are lost. Here we propose the use of clustering to segment the respondents so that a more detailed analysis can be achieved. Our findings suggest that it can give better insight into the survey population and the participants' opinions. This partitioning approach can show more precisely the extent of opinion differences between groups, and it also gives an opportunity for minorities to be heard. Through the process, significant new findings may also be obtained. In our example study regarding the state of testing and requirements activities in industry, we found several groups whose opinions differed markedly from the overall conclusion.
Privacy-enhancing technologies for the Smart Grid usually address either the consolidation of users’ energy consumption or the verification of billing information. The goal of this paper is to introduce iKUP, a protocol that addresses both problems simultaneously. iKUP is an efficient privacy-enhancing protocol based on DC-Nets and Elliptic Curve Cryptography-based commitments. It covers the entire cycle of power provisioning, consumption, billing, and verification. iKUP allows: (i) utility providers to obtain a consolidated energy consumption value that relates to the consumption of a user set, (ii) utility providers to verify the correctness of this consolidated value, and (iii) both utility providers and users to verify the correctness of the billing information. iKUP prevents utility providers from identifying individual contributions to the consolidated value and, therefore, protects the users’ privacy. The analytical performance evaluation of iKUP is validated through simulation using as input a real-world data set with over 157 million measurements collected from 6,345 smart meters. Our results show that iKUP performs worse than other protocols in aggregation and decryption, which are operations that happen only once per round of measurements and thus have a low impact on the total protocol performance. iKUP heavily outperforms other protocols in encryption, which is the most frequently executed cryptographic function, has the highest impact on the overall protocol performance, and is executed on the smart meters.
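To give a feel for the DC-Net-style aggregation that iKUP builds on, here is a toy Python sketch in which each meter masks its reading with pairwise shares that cancel out over the group, so only the sum is revealed; it omits iKUP's commitments, billing verification and elliptic-curve machinery, and all names and values are illustrative.

```python
# Toy sketch of the DC-net-style aggregation idea iKUP builds on: each meter
# masks its reading with pairwise random shares that cancel out across the
# group, so the provider learns only the sum. iKUP's commitments, billing
# verification and elliptic-curve machinery are omitted; values are examples.
import random

MODULUS = 2**32

def masked_readings(readings):
    n = len(readings)
    masked = list(readings)
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(MODULUS)   # share known only to meters i and j
            masked[i] = (masked[i] + r) % MODULUS
            masked[j] = (masked[j] - r) % MODULUS
    return masked

readings = [512, 301, 744, 128]             # Wh per interval (example values)
masked = masked_readings(readings)          # what the provider actually sees
total = sum(masked) % MODULUS               # consolidated consumption
assert total == sum(readings) % MODULUS
```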
Digital societies increasingly rely on secure communication between parties. Certificate enrollment protocols are used by certificate authorities to issue public key certificates to clients. Key agreement protocols, such as Diffie-Hellman, are used to compute secret keys, using public keys as input, for establishing secure communication channels. Whenever the keys are generated by clients, the bootstrap process requires either (a) an out-of-band verification for certification of keys when those are generated by the clients themselves, or (b) a trusted server to generate both the public and secret parameters. This paper presents a novel constrained key agreement protocol, built upon a constrained Diffie-Hellman, which is used to generate a secure public-private key pair, and to set up a certification environment without disclosing the private keys. In this way, the servers can guarantee that the generated key parameters are safe, and the clients do not disclose any secret information to the servers.
The Stream Control Transmission Protocol (SCTP) is a relatively recent general-purpose transport layer protocol for IP networks that has been introduced as a complement to the well-established TCP and UDP transport protocols. Although initially conceived for the transport of PSTN signaling messages over IP networks, the introduction of key features in SCTP, such as multihoming and multistreaming, has spurred considerable research interest surrounding SCTP and its applicability to different networking scenarios. This article aims to provide a detailed survey of one of these new features—multihoming—which, as is shown, is the subject of evaluation in more than half of all published SCTP-related articles. To this end, the article first summarizes and organizes SCTP-related research conducted so far by developing a four-dimensional taxonomy reflecting the (1) protocol feature examined, (2) application area, (3) network environment, and (4) study approach. Over 430 SCTP-related publications have been analyzed and classified according to the proposed taxonomy. As a result, a clear perspective on this research area in the decade since the first protocol standardization in 2000 is given, covering both current and future research trends. The article then provides a detailed survey of the SCTP multihoming feature, examining possible applications of multihoming, such as robustness, handover support, and load sharing.
In recent years, e-governance has been implemented in many countries. Within the same country, the level of achieved results can vary significantly between sectors. The implementation of e-governance in the Republic of Moldova has had a good start, but some stagnation in the implementation of the e-governance agenda has been registered. In the educational sector, the implementation is still at a low level. This practical paper surveys the e-tools in the educational sector of the Republic of Moldova, thus revealing the e-governance level of the sector. By comparing with the usage of IT tools in the Swedish educational system, and identifying the benefits and issues met during their development, it proposes a way forward for implementing the e-governance agenda in the educational sector in Moldova. While Moldova as a country has extensive Internet coverage, Sweden was chosen for the comparison because of its Internet coverage as well as its focus on furthering the skills of its workforce and the considerable efforts made in implementing its e-governance agenda.
Network Function Virtualization (NFV) is an emerging network architecture that increases flexibility and agility within operators' networks by placing virtualized services on demand in Cloud data centers (CDCs). One of the main challenges in the NFV environment is how to minimize network latency in rapidly changing network environments. Although Virtual Machine (VM) migration and Virtual Network Function (VNF) placement for efficient resource management in CDCs have been studied extensively, the VNF migration problem for low network latency among VNFs has, to the best of our knowledge, not yet been studied. To address this issue, in this article we i) formulate the VNF migration problem and ii) develop a novel VNF migration algorithm called VNF Real-time Migration (VNF-RM) for lower network latency under dynamically changing resource availability. Experiments demonstrate the effectiveness of our algorithm, reducing network latency by up to 70.90% after latency-aware VNF migrations.
ERP projects are complex and are considered among the most critical IT projects. When an ERP project succeeds, it can result in reduced costs, a faster production process and improved customer service. An ERP system can therefore be a good tool for improving a company's position relative to its competitors. Implementing an ERP system is a process associated with significant risks, especially if the customer company has not planned the implementation properly. It is important to understand the risks of implementation projects, and to understand how these risks can be avoided and managed; risks need to be taken into account already in the planning phase.
The purpose of this bachelor's thesis in information systems is to identify and describe how risk management reduces the risks in ERP implementation projects, from a system implementer's perspective. The chosen method is a case study: an anonymised case company's implementation project was studied through semi-structured interviews with four respondents holding different roles in the project. Ahead of the interviews, an interview guide was developed based on an analysis model, which in turn is grounded in literature studies of research articles on the subject.
The main conclusions are that the implementing company should run the training for end users, that members of management should sit on the steering committee, and that support staff should be brought into the project before go-live. Having the implementing company run the training reduces the stress on the customer company's employees and ensures that all end users are trained by experts in the ERP system. Having members of management on the steering committee shows support and commitment, and keeps management involved in decision-making if risks arise during the project. Bringing support staff into the project before go-live gives them a chance to learn the customer's system and lets the customer get to know the support staff before the maintenance phase, creating a natural communication channel for the customer to contact support with their issues.
IT analysts have repeatedly pronounced a death sentence on the IBM mainframe. As early as 1991, the well-known critic Stewart Alsop, then editor-in-chief of InfoWorld, wrote that the last mainframe would be taken out of service on 15 March 1996. This statement turned out to be wrong, which Stewart Alsop himself acknowledged in InfoWorld only days before his prediction would have come due. As we now enter yet another paradigm shift, with many services moving to the cloud, this thesis asks whether the IBM mainframe has a future in a cloud-based IT world, and if so, what that future looks like.
The aim is to investigate, through literature studies and interviews, whether the IBM mainframe can survive yet another sweeping technology revolution or whether it has played out its role. The study concludes that the IBM mainframe holds a strong position today, above all in the banking and finance sector, i.e., in industries with especially high requirements on availability, scalability and security. The mainframe is likely to play an important role in future cloud initiatives as well. IBM already offers cloud solutions that include mainframes, and the interviews conducted at IBM also showed that they see a bright future for IBM's mainframes. They argue that IBM is not merely following but actively leading the development of cloud services, and that it is primarily through open standards such as Linux and Unix that IBM will have its brightest future. The fact that IBM invests billions every year in the development of its mainframes also speaks clearly: IBM appears fully convinced that the mainframe has an important role to play in the cloud-based IT world now emerging.
The advances in mobile technologies enable mobile devices to perform tasks that are traditionally run by personal computers, as well as to provide services to others. Mobile users can form a service sharing community within an area by using their mobile devices. This paper highlights several challenges involved in building such service compositions in mobile communities when both service requesters and providers are mobile. To deal with them, we first propose a mobile service provisioning architecture named the mobile service sharing community, and then propose a service composition approach utilizing the Krill-Herd algorithm. To evaluate the effectiveness and efficiency of our approach, we built a simulation tool. The experimental results demonstrate that our approach obtains superior solutions compared with current standard composition methods in mobile environments. It can yield near-optimal solutions and has nearly linear complexity with respect to problem size.
Service mashups are applications created by combining single-function services (or APIs) dispersed over the web. With the development of cloud computing and web technologies, service mashups are becoming more and more widely used, and a large number of mashup platforms have been produced. However, due to the proliferation of services on the web, how to select component services to create mashups has become a challenging issue. Most developers pay particular attention to the QoS (quality of service) and cost of services. Besides service selection, mashup deployment is another pivotal process, as the platform can significantly affect the quality of mashups. In this paper, we focus on creating service mashups from the perspective of developers. A genetic algorithm-based method, GA4MC (genetic algorithm for mashup creation), is proposed to select component services and deployment platforms in order to create service mashups with optimal cost performance. A series of experiments is conducted to evaluate the performance of GA4MC. The results show that the GA4MC method can achieve mashups whose cost performance is extremely close to the optimum. Moreover, the execution time of GA4MC is of a low order of magnitude, and the algorithm shows good scalability as the experimental scale increases.
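The sketch below shows the general shape of such a genetic-algorithm selection in Python: a chromosome picks one candidate service per required function, and fitness is a simple quality-to-cost ratio. The encoding, fitness function and parameters are illustrative assumptions, not GA4MC's actual model, which also covers deployment platforms.

```python
# Genetic-algorithm sketch in the spirit of GA4MC: a chromosome picks one
# candidate service per required function, and fitness is a quality-to-cost
# ratio. Encoding, fitness and parameters are illustrative assumptions and
# ignore GA4MC's handling of deployment platforms.
import random

# candidates[slot] = list of (quality, cost) options for that function
candidates = [[(0.90, 5.0), (0.70, 2.0)],
              [(0.80, 3.0), (0.95, 6.0)],
              [(0.60, 1.0), (0.85, 4.0)]]

def fitness(chromosome):
    quality = sum(candidates[s][g][0] for s, g in enumerate(chromosome))
    cost = sum(candidates[s][g][1] for s, g in enumerate(chromosome))
    return quality / cost

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.randrange(len(c)) for c in candidates] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(candidates))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mutation_rate:
                slot = random.randrange(len(candidates))
                child[slot] = random.randrange(len(candidates[slot]))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # best service-selection chromosome found
```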
Brute-force attacks are a prevalent phenomenon that is getting harder to detect successfully on the network level, due to the increasing volume and encryption of network traffic and the growing ubiquity of high-speed networks. Although research in this field has advanced considerably, there still remain classes of attacks that are undetectable. In this chapter, we present several methods for the detection of brute-force attacks based on the analysis of network flows. We discuss their strengths and shortcomings, as well as shortcomings of flow-based methods in general. We also demonstrate the fragility of some methods by introducing detection evasion techniques.
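As a toy illustration of what a flow-based detector can look like (not one of the chapter's specific methods), the Python sketch below flags sources that open many short flows to the SSH port within an analysis window; the thresholds and field names are assumptions.

```python
# Toy flow-based heuristic (not one of the chapter's specific methods): flag
# sources that open many short flows to the SSH port within one analysis
# window. Thresholds and field names are illustrative assumptions.
from collections import defaultdict

def detect_ssh_bruteforce(flows, min_flows=30, max_packets=20, dst_port=22):
    """flows: iterable of dicts with 'src_ip', 'dst_port' and 'packets' keys."""
    per_src = defaultdict(int)
    for f in flows:
        if f["dst_port"] == dst_port and f["packets"] <= max_packets:
            per_src[f["src_ip"]] += 1
    return {src for src, count in per_src.items() if count >= min_flows}
```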
Reasoning graphs are one of many ways to visualize information. Certain types of information are very hard to understand when presented as text or as tables with huge amounts of numbers; it is easier to present them graphically. People can get a general idea of the information and, if the details are needed, there can be a way to add more information to the graphical display. A graphical visualization can compress information that, represented as text, might span thousands of lines into a single image or a small set of images. It is therefore a very powerful way to transmit information at a glance, without people wasting time reading many lines, values or numbers in text or tables.
The developed application has a 3D graphical user interface which shows the information as a reasoning graph. The user can navigate through the graph in 3D, expand nodes to see more information, and reposition them to restructure the graph.
This study examines the usability of two different versions of the document library Arkiva, owned by Projektengagemang. Arkiva exists in two versions: one that has been regarded as outdated and a newly developed one that is still untested. In this study they are compared to determine whether the new version is an improvement, and how users perceive its design and functionality.
The method used is usability testing, in which version 1 is compared with version 2 by having eight test participants perform basic tasks corresponding to the work tasks the system is intended for. The think-aloud method was applied during the tests. To further evaluate the users' perception, the tests were followed up with an interview in which participants answered questions about the design and functionality of the two document libraries.
The results show that participants preferred interacting with systems that feel more modern and intuitive, and that usability attributes are very important to consider when developing new systems in order to increase customer satisfaction.
One of the ambitions when designing the Stream Control Transmission Protocol (SCTP) was to offer robust transfer of traffic between hosts. For this reason, SCTP was designed to support multihoming, which presupposes the possibility of setting up several paths between the same hosts within the same session. If the primary path between a source machine and a destination machine breaks down, the traffic may still be sent to the destination by utilizing one of the alternate paths. The failover that occurs when changing path is intended to be transparent to the application.
This paper describes the results from experiments concerning SCTP failover performance, i.e., the time from the occurrence of a break on the primary path until traffic runs smoothly on the alternate path. The experiments are performed mainly to verify the Linux Kernel implementation of SCTP (LK-SCTP) and are run on the Emulab platform. The results will serve as a basis for further experiments.
The experiments are performed in a network without concurrent traffic. The results correspond well to the values found in other studies and are close to the theoretical best values. As expected, the parameter Path.Max.Retrans has a great impact on the failover time. One observation is that the failover time and the maximum transfer time for a message depend upon the state of the network when the break of the primary path occurs.
The Stream Control Transmission Protocol (SCTP) has not only been selected as the signaling transport protocol of choice in IETF SIGTRAN, the architecture that bridges circuit-switched and IP-based mobile core networks, but also plays a pivotal role in SAE/LTE, the next generation of UMTS/HSPA networks. To meet the redundancy requirements of telecom signaling traffic, SCTP includes a failover mechanism that enables rerouting of traffic from an unreachable network path to a backup path. However, the recommendations provided by the IETF on how to configure the SCTP failover mechanism to meet telecom signaling requirements are kept quite general and leave much of the tuning to the telecom equipment vendor and/or operator. Several works by us and others have studied the effect of different SCTP parameters on failover performance. The main contribution of this paper is a coherent treatment of how to configure the SCTP failover mechanism for carrier-grade telephony signaling, together with practically usable configuration recommendations. The paper also discusses an alternative or complementary way of optimizing the SCTP failover mechanism: relaxing the exponential backoff applied to the retransmission timeout in SCTP. Results showing significantly reduced failover times with this mechanism, with only marginal detrimental effects on a signaling network, are discussed and analyzed in the paper.
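To illustrate why Path.Max.Retrans and the exponential backoff dominate failover time, the Python sketch below estimates the time to declare a path unreachable, assuming every attempt times out and the retransmission timeout doubles from RTO.Min up to RTO.Max; the RFC 4960 default values used here are an assumption for illustration, not the paper's recommended configuration.

```python
# Rough estimate of SCTP failover time as a function of Path.Max.Retrans,
# assuming every attempt times out and the retransmission timeout doubles
# (exponential backoff) from RTO.Min up to RTO.Max. The RFC 4960 defaults
# used here (RTO.Min = 1 s, RTO.Max = 60 s, Path.Max.Retrans = 5) are an
# illustrative assumption, not the paper's recommended configuration.
def failover_time(path_max_retrans=5, rto_min=1.0, rto_max=60.0):
    rto, total = rto_min, 0.0
    # the initial timeout plus path_max_retrans further timeouts are needed
    # before the path is declared unreachable and traffic fails over
    for _ in range(path_max_retrans + 1):
        total += rto
        rto = min(2 * rto, rto_max)
    return total

for pmr in (0, 1, 2, 5):
    print(f"Path.Max.Retrans={pmr}: ~{failover_time(pmr):.0f} s to fail over")
```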