The NEAT system was developed in 2017 to increase flexibility in the choice of network transport protocol. One of its most important components is the Policy Manager (PM), which determines what protocol is to be used by the application. The PM is written in Python while the rest of the NEAT system is C-based, so a natural evolution of the PM is a functional translation of it to C. While the main goal was to develop a fully functional C-based PM, the change of programming language also brought a 28-fold performance increase compared with the Python-based PM. A few improvements remain to be made in the PM, but it is already a notable improvement for the NEAT system as a whole.
The number of devices connected to the internet, and the traffic of their applications, increases continuously. Devices provide increasing support for multi-homing and can utilize different access networks for end-to-end communication. The simultaneous use of multiple access networks can increase end-to-end performance by aggregating capacities from multiple disjoint networks through multipath communication. However, at this point in time, multipath-compatible transport layer protocols and multipath support at lower layers of the network stack have not seen widespread adoption. Tunneled transport layer access bundling is an approach that allows all types of single-path resources to exploit multipath communication by tunneling data over a Virtual Private Network (VPN) with transparent entry points on the User Equipment (UE) and on the internet. Commonly, such solutions utilize a single queue to buffer incoming packets, which poses problems for fair multiplexing between concurrent application flows and makes them susceptible to bufferbloat. We designed and implemented extensions to Pluginized QUIC (PQUIC) that enable Flow Queuing Controlled Delay (FQ-CoDel) as a queueing discipline in tunneled transport layer access bundling, to investigate whether it is possible to achieve fair multiplexing between application flows while mitigating bufferbloat at the transport layer. An evaluation in the network emulator Mininet shows that FQ-CoDel can provide instant, constant, and fair access to the VPN while significantly lowering the end-to-end latency for tunneled application flows. Furthermore, the results indicate that packet schedulers that adapt to current network characteristics, such as Lowest-RTT-First (LowRTT), uphold this performance over heterogeneous networks while keeping the benefits of FQ-CoDel.
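To make the queueing idea concrete, the following is a minimal Python sketch of FQ-CoDel-style scheduling, assuming hash-based flow separation, round-robin service, and a grossly simplified sojourn-time drop rule; the constants are illustrative, and the actual work extends PQUIC rather than implementing anything like this.

```python
# Illustrative FQ-CoDel-style scheduler: flows hash onto separate queues,
# queues are served round-robin, and packets that linger past a target
# sojourn time are dropped (real CoDel tracks drop state per INTERVAL).
import time
from collections import deque, defaultdict

TARGET = 0.005    # target sojourn time (5 ms), assumed for illustration
INTERVAL = 0.100  # CoDel interval (100 ms), unused in this simplified rule

class FlowQueues:
    def __init__(self, n_queues=1024):
        self.queues = defaultdict(deque)
        self.n_queues = n_queues
        self.order = deque()                   # round-robin over active flows

    def enqueue(self, pkt, flow_id):
        q = hash(flow_id) % self.n_queues      # hash the flow onto a queue
        if not self.queues[q]:
            self.order.append(q)               # flow becomes active
        self.queues[q].append((pkt, time.monotonic()))

    def dequeue(self):
        while self.order:
            q = self.order[0]
            pkt, enq_time = self.queues[q].popleft()
            if not self.queues[q]:
                self.order.popleft()           # flow drained, deactivate
            else:
                self.order.rotate(-1)          # fair round-robin rotation
            if time.monotonic() - enq_time > TARGET:
                continue                       # drop: packet sat too long
            return pkt
        return None
```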
The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
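For reference, with the $N$ word probabilities sorted in decreasing order, $p_{(1)} \ge p_{(2)} \ge \cdots \ge p_{(N)}$ (notation assumed here), the average number of guesses under this strategy is the standard guesswork expression

$$\bar{G} = \sum_{i=1}^{N} i\, p_{(i)}.$$

For words of $n$ independent letters the word probability is a product of letter probabilities, so its logarithm is a sum of $n$ terms; this is why the density of the logarithm of the probability products is close to a normal distribution for many letter distributions, as noted above.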
The purpose of the work is to calculate accurate values of molecular properties of tetracyanoquinodimethane (TCNQ) and anions using the complete active space self-consistent field and complete active space second-order perturbation theory methods. The accuracy has been evaluated using several basis sets and active spaces. The calculated properties have, in many cases, been confirmed by experimental data (within parentheses), e.g., 9.54 eV (9.61 eV) and 3.36 eV (3.38 eV) for the ionization potential and electron affinity, respectively, of TCNQ; 3.12 eV (3.01 eV) and 3.54 eV (3.42 or 3.60 eV) for transition energies to the two lowest-lying excited singlet states of TCNQ; − 0.03, 0.46 and 1.44 eV (0, 0.5 and 1.4 eV) for electronic energies in electron attachment of TCNQ forming $$\hbox {TCNQ}^-$$; and 3.88 eV (3.71 eV) for the transition energy to the second lowest-lying excited singlet state of $$\hbox {TCNQ}^{2-}$$. Further, the calculations have brought insight into some experimental observations, e.g., the shape of the fluorescence spectrum of TCNQ at 3–4 eV.
Using the CASSCF/CASPT2 methodology, the electronic transitions HOMO → LUMO, HOMO → LUMO+1, HOMO-1 → LUMO and HOMO-2 → LUMO are determined for C60. Comparison to experiment suggests an accuracy better than 0.3 eV. Some illustrative examples are (with experimental data within parentheses) the first excited state, $$^3T_{2g}$$, at 1.54 eV (1.60 eV), the two lowest-lying $$^1T_{1u}$$ states (for spin- and symmetry-allowed transitions) at 3.09 eV (3.08 eV) and 3.19 eV (3.30 eV), and the lowest singlet excited states ($$^1G_g$$, $$^1T_{1g}$$, $$^1T_{2g}$$, $$^1H_g$$) at [1.84, 1.95] eV (1.90 eV, with mainly $$^1G_g$$ and $$^1T_{1g}$$ and minor $$^1T_{2g}$$ character).
In the Swedish school, the textbook is seen as a central teaching material in mathematics. In many classrooms, individual, silent work in the textbook is particularly common during mathematics lessons. Silent work often makes up a large part of the lesson, although research shows that the teaching of mathematics should include variation and communication and that pupils should at an early age be given the opportunity to understand the relation between mathematics and daily life. The teacher has a central role in choosing which teaching materials are best suited for each teaching situation and each individual pupil. The teacher is also supposed to adapt instruction to each individual's ability in a way that enhances the pupil's curiosity, thus contributing to increased inner motivation. The aim of this study was to get teachers' perspective on textbooks. In school mathematics the textbook is a common teaching material, and because of that there were questions about how well the textbook actually lives up to the teachers' needs. The chosen method was both qualitative and quantitative. A survey was sent out to mathematics teachers in grades 1-3. A simple survey with a reduced number of predetermined answers was supposed to attract many teachers to answer it. Unfortunately, participation among the respondents was low, which made it difficult to answer the research questions. As a complement, a more in-depth survey was sent out to only a few respondents. The two surveys were coordinated in the analysis. The results showed that teachers in mathematics in grades 1-3 often use a textbook. A large part of them believe the textbook is a motivating factor for the pupils, and that younger pupils like to work with their textbook. The results also showed that there is a difference of opinion as to whether or not the textbook itself gives good opportunities for adaptive instruction, as well as concerning how well it is aligned with their own understanding of the required knowledge goals for grade 3. All teachers see their own teaching as varied and consider the communicative aspect of the subject important. The results also showed that the teachers who work without textbooks see no advantages with the textbook, except as a help for their own planning. The conclusion is that the textbook is an important teaching material, not least as a support in the teacher's own planning. Teachers in grades 1-3 believe themselves to have a varied mathematics pedagogy and also that they have motivated pupils. The teachers agree on the importance of anchoring mathematics in daily life since it is always there, around us.
An experiment in which the Clauser-Horne-Shimony-Holt inequality is maximally violated is self-testing (i.e., it certifies in a device-independent way both the state and the measurements). We prove that an experiment maximally violating Gisin's elegant Bell inequality is not similarly self-testing. The reason can be traced back to the problem of distinguishing an operator from its complex conjugate. We provide a complete and explicit characterization of all scenarios in which the elegant Bell inequality is maximally violated. This enables us to see exactly how the problem plays out.
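For reference, the CHSH expression whose maximal violation is self-testing: with binary observables $A_1, A_2$ for Alice and $B_1, B_2$ for Bob,

$$S = \langle A_1 B_1\rangle + \langle A_1 B_2\rangle + \langle A_2 B_1\rangle - \langle A_2 B_2\rangle, \qquad |S| \le 2 \ \text{(local)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum)},$$

and the maximal quantum value $2\sqrt{2}$ is attained, up to local isometries, only by the appropriate measurements on a maximally entangled qubit pair.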
The aim of this study is to examine how teachers view the ability to make and follow mathematical reasoning and how teachers' mathematics lessons can be organized to enable students to develop this ability. We have used the framework described by Herbert et al. (2015) for primary teachers' perceptions of mathematical reasoning. We have also created our own framework based on what research shows fosters students' mathematical reasoning ability, and based on this we made a deductive content analysis. Through semi-structured interviews, 12 teachers in grades 2-3 gave their views on mathematical reasoning and how they organize their lessons to foster students' mathematical reasoning ability. The results show that teachers view reasoning as hard to define but that they still conduct lessons that make it possible for students to develop this ability. Furthermore, the results show that the lessons the teachers conduct reflect a more developed perception of mathematical reasoning than the one they themselves express. Most of the teachers express that the mathematics textbook does not give students the opportunity for mathematical reasoning. Some teachers mention the material Sluta räkna-serien by Ulla Öberg as especially effective for fostering students' mathematical reasoning ability. What dominates the teachers' lessons when working with mathematical reasoning is problem solving, open tasks, working in pairs or groups, and working with concrete material.
In this thesis, the main objective is to study the presence of the Gibbs phenomenon and the Gibbs constant in Fourier-Legendre series. The occurrence of the Gibbs phenomenon is a well-known consequence of approximating functions that have points of discontinuity with Fourier series. Consequently, the initial focus is to examine Fourier series and the occurrence of the Gibbs phenomenon in this context. Next, we delve into Legendre polynomials, showing how their orthogonality on [−1, 1] makes it possible to expand functions in Fourier-Legendre series. We then continue to explore the Gibbs phenomenon for Fourier-Legendre series. The findings confirm the existence of the Gibbs phenomenon for Fourier-Legendre series, and, most notably, the values of the error seem to converge to the same number as for Fourier series, namely the Gibbs constant.
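The limiting value referred to above is the Wilbraham-Gibbs constant

$$G = \frac{2}{\pi}\int_{0}^{\pi}\frac{\sin t}{t}\,dt \approx 1.17898,$$

meaning that near a unit jump the partial sums overshoot each side by about $(G-1)/2 \approx 8.9\%$ of the jump.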
A general approach to Monte Carlo methods for Coulomb collisions is proposed. Its key idea is an approximation of Landau-Fokker-Planck (LFP) equations by Boltzmann equations of quasi-Maxwellian kind. High-frequency fields are included in the consideration, and comparisons with well-known results are given.
Digital tools are becoming increasingly widespread in our everyday lives as well as in school. However, according to the national agency for education in Sweden, digital tools are rarely used to develop mathematics education. With a survey, I investigated how teachers integrate digital tools in mathematics education and whether the teachers' perceived knowledge affects the integration in some way. With help from the TPACK framework, I analyzed the results from 97 respondents. The results showed that teachers who experience their knowledge as good tend to vary their teaching using digital tools more than teachers who judged their skills to be inadequate. The most common areas of use for digital tools in mathematics are skills training, class review and visualization. The study suggests that there is a need for training in how to use digital tools in education in order to integrate them in a favorable manner.
Today, the customer has no automated method for finding and collecting broken links on their website. This is done manually or not at all.
This project has resulted in a practical product that can be applied to the customer’s website. The aim of the product is to ease the work of collecting and managing broken links on the website. This is achieved by effectively gathering all broken links and placing them in a separate list that can be exported at will by an administrator, who can then fix these broken links.
The quality of the customer’s website will be higher, as all broken links will be easier to find and remove. This will ultimately give visitors a better experience.
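As an illustration of the product's core idea, here is a minimal, hypothetical Python sketch that gathers a page's broken links and exports them to a list an administrator could work through; the names, the CSV format, and the example URL are assumptions, not the delivered product.

```python
# Sketch: collect links from a page, probe each one, and export the broken
# ones (HTTP status >= 400 or unreachable) to a CSV list.
import csv
import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def broken_links(page_url, timeout=5):
    parser = LinkParser()
    parser.feed(requests.get(page_url, timeout=timeout).text)
    broken = []
    for href in parser.links:
        url = urljoin(page_url, href)        # resolve relative links
        try:
            status = requests.head(url, timeout=timeout,
                                   allow_redirects=True).status_code
        except requests.RequestException:
            status = None                    # unreachable counts as broken
        if status is None or status >= 400:
            broken.append((url, status))
    return broken

# Export for an administrator, as described above:
with open("broken_links.csv", "w", newline="") as f:
    csv.writer(f).writerows(broken_links("https://example.com"))
```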
The performance of Volt/VAr optimization has been significantly improved due to the integration of measurement data obtained from the advanced metering infrastructure of a smart grid. However, most of the existing works lack: 1) realistic unbalanced multi-phase distribution system modeling; 2) scalability of the Volt/VAr algorithm to larger test systems; and 3) the ability to handle gross errors and noise in data processing. In this paper, we consider realistic distribution system models that include unbalanced loadings and multi-phased feeders and the presence of gross errors such as communication errors and device malfunction, as well as random noise. At the core of the optimization process is an intelligent particle swarm optimization-based technique that is parallelized using high-performance computing techniques to solve the Volt/VAr-based power loss minimization problem. Extensive experiments covering the different aspects of the proposed framework show significant improvement over existing Volt/VAr approaches in terms of both accuracy and scalability on the IEEE 123-node and the larger IEEE 8500-node benchmark test systems.
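For reference, a minimal sketch of the particle swarm optimization (PSO) update at the core of the approach; the toy objective stands in for the Volt/VAr power-loss evaluation, which in the paper requires a power flow computation, and all constants are illustrative.

```python
# Standard PSO: each particle moves under inertia plus attraction toward its
# personal best and the swarm-wide best position.
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # e.g. control settings
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)]                 # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g

print(pso(lambda z: np.sum(z ** 2), dim=4))         # toy stand-in objective
```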
Having smart and autonomous earthmoving in mind, we explore high-performance wheel loading in a simulated environment. This paper introduces a wheel loader simulator that combines contacting 3D multibody dynamics with a hybrid continuum-particle terrain model, supporting realistic digging forces and soil displacements at real-time performance. A total of 270,000 simulations are run with different loading actions, pile slopes, and soils to analyze how they affect the loading performance. The results suggest that the preferred digging actions should preserve and exploit a steep pile slope. High digging speed favors high productivity, while energy-efficient loading requires a lower dig speed.
In the quest for the development of faster and more reliable technologies, the ability to control the propagation, confinement, and emission of light has become crucial. The design of guided-mode resonators and perfect absorbers has proven to be of fundamental importance. In this project, we consider the shape optimization of a periodic dielectric slab aiming at efficient directional routing of light, reproducing features similar to those of a guided-mode resonator. For this, the design objective is to maximize the routing efficiency of an incoming wave; that is, the goal is to promote wave propagation along the periodic slab. A Helmholtz problem with a piecewise-constant and periodic refractive index medium models the wave propagation, and an accurate Robin-to-Robin map models the exterior domain. We propose an optimal design strategy that consists of representing the dielectric interface by a finite Fourier formula and using its coefficients as the design variables. Moreover, we use a high-order finite element (FE) discretization combined with a bilinear transfinite interpolation formula. This setting admits explicit differentiation with respect to the design variables, from which an exact discrete adjoint method computes the sensitivities. We show in detail how the sensitivities are obtained in the quasi-periodic discrete setting. The design strategy employs gradient-based numerical optimization, consisting of a BFGS quasi-Newton method with backtracking line search. As a test case, we present results for the optimization of a so-called single-port perfect absorber. We test our strategy for a variety of incoming wave angles and different polarizations. In all cases, we efficiently reach designs featuring high routing efficiencies that satisfy the required criteria.
In this project, we consider the shape optimization of a dielectric scatterer aiming at efficient directional routing of light. In the studied setting, light interacts with a penetrable scatterer with dimension comparable to the wavelength of an incoming planar wave. The design objective is to maximize the scattering efficiency inside a target angle window. For this, a Helmholtz problem with a piecewise-constant refractive index medium models the wave propagation, and an accurate Dirichlet-to-Neumann map models the exterior domain. The strategy consists of using a high-order finite element (FE) discretization combined with gradient-based numerical optimization. The latter consists of a quasi-Newton (BFGS) method with backtracking line search. A discrete adjoint method is used to compute the sensitivities with respect to the design variables. In particular, for the FE representation of the curved shape, we use a bilinear transfinite interpolation formula, which admits explicit differentiation with respect to the design variables. We exploit this fact and show in detail how sensitivities are obtained in the discrete setting. We test our strategy for a variety of target angles, different wave frequencies, and refractive indices. In all cases, we efficiently reach designs featuring high scattering efficiencies that satisfy the required criteria.
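Both projects above obtain sensitivities via a discrete adjoint; for reference, its standard form (the notation here is ours, not the papers'): with the discretized state equation $R(u,\alpha)=0$ and objective $J(u,\alpha)$, a single adjoint solve yields all design derivatives,

$$\left(\frac{\partial R}{\partial u}\right)^{T}\lambda = -\left(\frac{\partial J}{\partial u}\right)^{T}, \qquad \frac{dJ}{d\alpha_i} = \frac{\partial J}{\partial \alpha_i} + \lambda^{T}\frac{\partial R}{\partial \alpha_i},$$

which is what makes gradient-based optimization affordable even for many design variables.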
Embedded systems are becoming more prevalent in today's society, leading to higher demands on lowering production costs for companies and reducing the environmental impact. One type of product that uses embedded systems is the object-tracking surveillance camera. There are several ways a surveillance camera can track an object, one of which is by tracking a sound source. To achieve this, two different sound localization techniques were implemented, investigated and evaluated. A prototype was created with two microphones connected to a microcontroller, on which the two sound localization techniques were deployed. Because the noise level in an environment varies between scenarios, an adaptable limit value was also implemented. Furthermore, we evaluated the algorithms with preliminary experiments to determine which of the algorithms performed best in different scenarios. From the results, we can conclude that ILD (interaural level difference) generally performs better than ITD (interaural time difference) and that the algorithms can successfully run on cheap, off-the-shelf hardware. The algorithms could likely be run on a cheaper microcontroller than the one used during the project.
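As an illustration of the two techniques, here is a NumPy sketch (an analogue only; the thesis prototype ran on a microcontroller) computing an ILD estimate from energy ratios and an ITD estimate from the cross-correlation peak:

```python
# ILD: ratio of channel energies in dB; ITD: lag of the cross-correlation
# peak between the two microphone signals.
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Interaural level difference; the sign indicates the louder side."""
    e_l = np.sum(left.astype(float) ** 2)
    e_r = np.sum(right.astype(float) ** 2)
    return 10.0 * np.log10((e_l + eps) / (e_r + eps))

def itd_seconds(left, right, fs):
    """Interaural time difference; the sign tells which channel leads."""
    corr = np.correlate(left.astype(float), right.astype(float), mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag of the peak, in samples
    return lag / fs

fs = 48_000
t = np.arange(fs // 100) / fs                  # 10 ms test tone
src = np.sin(2 * np.pi * 1000 * t)
left = np.concatenate([src, np.zeros(24)])
right = np.concatenate([np.zeros(24), src]) * 0.5  # delayed and attenuated
# right is delayed by 24 samples and quieter: ILD ~ +6 dB, ITD = -0.5 ms
print(ild_db(left, right), itd_seconds(left, right, fs))
```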
We study the weak solvability of a system of coupled Allen–Cahn-like equations resembling cross-diffusion which arises as a model for the consolidation of saturated porous media. Besides using energy-like estimates, we cast the special structure of the system in the framework of the Leray–Schauder fixed-point principle and ensure in this way the local existence of strong solutions to a regularized version of our system. Furthermore, weak convergence techniques ensure the existence of weak solutions to the original consolidation problem. The uniqueness of global-in-time solutions is guaranteed in a particular case. Moreover, we use a finite difference scheme to show the negativity of the vector of solutions.
New and innovative technologies are constantly being developed to improve the techniques already in use. This project was about evaluating whether containers could be something for the IT company Tieto to use in their telecommunications products. Containers are portable, standalone, executable lightweight packages of software that also contain everything needed to run the software. Containers are a very hot topic right now and a fast-growing technology. Tieto wanted an investigation of the technology, to be carried out with certain requirements, the main one being a working and executable protocol stack in a container environment. In the investigation, a proof of concept was developed; a proof of concept is a realization of a certain method or idea in order to demonstrate its feasibility. The proof of concept led to Tieto wanting additional experiments carried out on containers. The experiments investigated whether performance equal to Tieto's current virtual machine based method could be achieved with containers. The experiments showed a small reduction in performance, but also benefits such as higher flexibility. Further development of the container method could provide an equally good solution. The project can therefore be seen as successful, as both the developed proof of concept and the experiments carried out point to this new technology becoming part of Tieto's product development in the future.
The fast growth of Internet traffic, the growing importance of cellular accesses and the escalating competition between content providers and network operators result in a growing interest in improving network performance and user experience. In terms of network transport, different solutions ranging from tuning TCP to installing middleboxes are applied. It turns out, however, that the practical results sometimes are disappointing, and we believe that poor testing is one of the reasons for this. Indeed, many cases in the literature limit testing to the simple and rare use case of a single file download, while common and complex use cases like web browsing often are ignored or modelled only by considering smaller files. To facilitate better testing, we present a set of metrics by which the complexity around web pages can be characterised and the potential for different optimisations can be estimated. We also derive numerical values of these metrics for a small set of popular web pages and study similarities and differences between pages with the same kind of content (newspapers, e-commerce and video) and between pages designed for the same platform (computer and smartphone).
Auto-scaling of Web applications is an extensively investigated issue in cloud computing. To evaluate auto-scaling mechanisms, the cloud community faces considerable challenges on both real cloud platforms and custom test-beds. Challenges include – but are not limited to – deployment impediments, the complexity of setting parameters, and, most importantly, the cost of hosting and testing Web applications on a massive scale. Hence, simulation is presently one of the most popular evaluation solutions to overcome these obstacles. Existing simulators, however, fail to support hosting, deploying and subsequently auto-scaling Web applications. In this paper, we introduce AutoScaleSim, which extends the existing CloudSim simulator to support auto-scaling of Web applications in cloud environments in a customizable, extendable and scalable manner. Using AutoScaleSim, the cloud community can freely implement and evaluate policies for all four phases of auto-scaling mechanisms, that is, Monitoring, Analysis, Planning and Execution. AutoScaleSim can similarly be used for evaluating load balancing algorithms. We conducted a set of experiments to validate and carefully evaluate the performance of AutoScaleSim against a real cloud platform, with a wide range of performance metrics.
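A conceptual sketch of the four pluggable phases, in Python; the threshold policy, class names, and control loop below are assumptions for illustration, not AutoScaleSim's API:

```python
# Toy MAPE loop: Monitoring, Analysis, Planning, Execution over simulated VMs.
from dataclasses import dataclass
import random

@dataclass
class VM:
    cpu_load: float                         # utilisation in [0, 1]

def monitor(vms):                           # Monitoring: observe the tier
    return sum(vm.cpu_load for vm in vms) / len(vms)

def analyze(avg, upper=0.8, lower=0.3):     # Analysis: over/under-provisioned?
    return "out" if avg > upper else "in" if avg < lower else "hold"

def plan(decision, step=1):                 # Planning: capacity change
    return {"out": step, "in": -step, "hold": 0}[decision]

def execute(vms, delta):                    # Execution: apply the plan
    for _ in range(delta):
        vms.append(VM(cpu_load=0.0))
    for _ in range(-delta):
        if len(vms) > 1:
            vms.pop()

vms = [VM(random.random()) for _ in range(3)]
for tick in range(5):                       # one control loop per interval
    execute(vms, plan(analyze(monitor(vms))))
    for vm in vms:                          # toy workload fluctuation
        vm.cpu_load = random.random()
    print(tick, len(vms))
```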
Mobile wireless networks constitute an indispensable part of the global Internet, and with TCP being the dominant transport protocol on the Internet, it is vital that TCP works as well over these networks as over wired ones. This paper identifies the performance dependencies by analyzing the responsiveness of TCP NewReno and TCP CUBIC when subject to bandwidth variations related to movements in different directions. The presented evaluation complements previous studies on 4G mobile networks in two important ways: it primarily focuses on the behavior of the TCP congestion control in medium- to high-velocity mobility scenarios, and it considers not only current 4G mobile networks but also low-latency configurations that approach the delays envisioned for 5G networks. The paper suggests that while both CUBIC and NewReno give similar goodput in scenarios where the radio channel continuously degrades, CUBIC gives significantly better goodput in scenarios where the radio channel quality continuously increases. This is due to CUBIC probing more aggressively for additional bandwidth. Importantly for the design of 5G networks, the obtained results also demonstrate that very low latencies are capable of equalizing the goodput performance of different congestion control algorithms. Only in low-latency scenarios that combine both large fluctuations of available bandwidth and a mobility pattern in which the radio channel quality continuously increases can some performance differences be noticed.
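The more aggressive probing attributed to CUBIC above stems from its cubic window-growth function (RFC 8312): after a loss event at window size $W_{\max}$, the congestion window $t$ seconds later is

$$W(t) = C\,(t - K)^{3} + W_{\max}, \qquad K = \sqrt[3]{\frac{W_{\max}(1-\beta)}{C}},$$

where $C$ is a scaling constant and $\beta$ the multiplicative decrease factor. The concave-then-convex shape makes CUBIC plateau near $W_{\max}$ and then probe increasingly fast, which matches the behavior observed when the channel quality keeps improving.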
Mobile internet usage has risen significantly over the last decade and is expected to grow to almost 4 billion users by 2020. Even after the great effort dedicated to improving performance, there still exist unresolved questions and problems regarding the interaction between TCP and mobile broadband technologies such as LTE. This chapter surveys the behavior of distinct TCP implementations under various network conditions in different LTE deployments, including the extent to which the performance of TCP is capable of adapting to the rapid variability of mobile networks under different network loads, with distinct flow types, during the start-up phase, and in mobile scenarios at different speeds. Loss-based algorithms tend to completely fill the queue, creating huge standing queues and inducing packet losses under both stationary and mobile conditions. On the other hand, delay-based variants are capable of limiting the standing queue size and decreasing the number of packets that are dropped at the eNodeB, but under some circumstances they are unable to reach the maximum capacity. Under mobility, where the radio conditions are more challenging for TCP, the loss-based TCP implementations offer better throughput and are able to better utilize available resources than the delay-based variants do. Finally, CUBIC under highly variable circumstances usually enters the congestion avoidance phase prematurely due to its Hybrid Slow-Start mechanism, provoking a slower and longer start-up phase. Therefore, CUBIC is unable to efficiently utilize radio resources during shorter transmission sessions.
Nowadays, more than two billion people use the mobile internet, and this number is expected to rise to almost 4 billion by 2020. Still, there is a gap in the understanding of how TCP and its many variants work over LTE. To this end, this paper evaluates the extent to which five common TCP variants, CUBIC, NewReno, Westwood+, Illinois, and CAIA Delay Gradient (CDG), are able to utilise available radio resources under hard conditions, such as during start-up and in mobile scenarios at different speeds. The paper suggests that CUBIC, due to its Hybrid Slow-Start mechanism, enters congestion avoidance prematurely, thus experiences a prolonged start-up phase, and is unable to efficiently utilise radio resources during shorter transmission sessions. Still, CUBIC, Illinois and NewReno, i.e., the loss-based TCP implementations, offer better throughput and are able to better utilise available resources during mobility than the delay-based variants Westwood+ and CDG do.
TCP BBR (Bottleneck Bandwidth and Round-trip propagation time) is a new TCP variant developed at Google which, as of this year, is fully deployed in Google's internal WANs and used by services such as Google.com and YouTube. In contrast to other commonly used TCP variants, TCP BBR is not loss-based but model-based: it builds a model of the network path between communicating nodes in terms of bottleneck bandwidth and minimum round-trip delay, and tries to operate at the point where all available bandwidth is used and the round-trip delay is at its minimum. Although TCP BBR has indeed resulted in lower latency and more efficient usage of bandwidth in fixed networks, its performance over cellular networks is less clear. This paper studies TCP BBR in live mobile networks and through emulations, and compares its performance with TCP NewReno and TCP CUBIC, two of the most commonly used TCP variants. The results from these studies suggest that in most cases TCP BBR outperforms both TCP NewReno and TCP CUBIC, but not when the available bandwidth is scarce. In such cases, TCP BBR yields longer file completion times than either of the other two studied TCP variants. Moreover, competing TCP BBR flows do not share the available bandwidth in a fair way, something which, for example, shows up when shorter TCP BBR flows struggle to get their fair share from longer ones.
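The operating point described above is the bandwidth-delay product of the path: BBR estimates the bottleneck bandwidth ($\mathrm{BtlBw}$) and the minimum round-trip time ($\mathrm{RTprop}$) and steers the amount of data in flight toward

$$\mathrm{BDP} = \mathrm{BtlBw} \times \mathrm{RTprop},$$

the point at which the available bandwidth is fully used without building a standing queue at the bottleneck.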
With the advancement of technology in recent decades, it has increasingly become possible to automate repetitive and administrative tasks. This leads to a better work environment for employees, who can spend more time and focus on their primary responsibilities. However, the technology is not fully utilized in some areas, such as manual time reporting, where employees have to navigate between different systems and keep track of the time they have spent on different tasks. Our task has been to use system integration to automate this process, and by doing so contribute to a better work environment. Through the use of cloud services, we have created an integration where data can flow from one system to another and be enriched with necessary information on the way. The result of our work is an automated solution that allows employees to report time directly in the case management system, from which the information is sent to the cloud for enrichment and formatting before being passed on to the ERP system, where the time reporting is completed.
A scalable, flexible and reliable Analytics service has become a requirement toward building efficient Fifth Generation (5G) experimental platforms that can support a suite of end-user experiments and verticals. Our paper presents the challenges that come with designing such a service-based Analytics component, and shows how we have used it in the context of open experimental platforms in the 5GENESIS project. Our Analytics service was designed both for enabling the efficient setup and configuration of the underlying platform, and also for ensuring that it provides useful insights into the experimentation Key Performance Indicators (KPIs) toward the end-user. Thus, Analytics proved to be a useful tool across several stages, starting from ensuring correct operation during the initial phases of the network setup and continuing into the normal day-to-day experimentation. Our experiments show how the tool was used in our setup and provide information on how to apply it to different environments. The Analytics component, designed as a set of microservices that serve several goals in the analytics workflow, is also provided as open source, being part of the Open5Genesis suite.
With the increasing demand for solar energy, the forecast of a PV station's energy production has to be as precise as possible. To make the prediction more robust, correlated information about the weather can be added to the previous energy production of the PV station. This thesis is part of a project whose goal is to build an energy marketplace for a smart energy grid between households. To make the decisions of the prosumers more accurate, a forecast of the PV station energy production has to be as accurate as possible. Because not every household, or even every smart grid, will contain a weather station, interpolated weather information also has to be considered. The objective of this work is to evaluate the difference in accuracy between precise weather information recorded directly at the PV station and interpolated weather data.
The errors in the data were caused by malfunctions in the sensors and were removed using winsorization. Unnecessary weather features were detected with several feature selection methods. For the forecast of the energy production, three established machine learning algorithms were used: Random Forest, LSTM and Facebook Prophet. Different performance metrics were used to compare the performance. The validation of the three models was carried out by walk-forward cross-validation with unseen data. Furthermore, the three machine learning models were each trained on both of the two datasets. For the performance measurement, the models were also cross-evaluated: the LSTM model trained on precise weather information, for instance, also received the interpolated data as input for its predictions, and vice versa. In conclusion, the Random Forest model performed better than the other two model types, with an average normalized error of 0.15, whereas the LSTM model received an error of 0.37 and the Prophet model 0.58. Regarding the difference between interpolated and actual weather information, the results show that the uncertainty in those variables also affects the prediction of the PV station energy outcome: the MSE of the LSTM model increased by 14 percent and that of the Random Forest model by 16 percent. The thesis ends with a discussion of the results and possible tasks for future work.
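A minimal sketch of the winsorization step, assuming percentile-based limits (the thesis's actual limits are not stated here):

```python
# Winsorization: clamp values outside the given percentiles to the bounds,
# taming extreme sensor errors without discarding rows.
import numpy as np

def winsorize(x, lower_pct=1.0, upper_pct=99.0):
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip(x, lo, hi)

readings = np.array([0.0, 5.2, 5.4, 5.1, 250.0, 5.3, -40.0, 5.0])
print(winsorize(readings))  # the outliers 250.0 and -40.0 are pulled in
```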
Prediction of solar power generation is important in order to optimize energy exchanges in future micro-grids that integrate a large amount of photovoltaics. However, an accurate prediction is difficult due to the uncertainty of the weather phenomena that impact produced power. In this paper, we evaluate the impact of different clustering methods on the forecast accuracy for predicting hour-ahead solar power when using machine learning based prediction approaches trained on weather and generated power features. In particular, we compare clustering methods using the clearness index and K-means clustering, where we use both Euclidean distance and dynamic time warping. For evaluating prediction accuracy, we develop and compare different prediction models for each of the clusters using production data from a Swedish smart grid. We demonstrate that proper tuning of thresholds for the clearness index improves prediction accuracy by 20.19% but results in worse performance than using K-means with all weather features as input to the clustering.
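A sketch of the clearness-index clustering idea with scikit-learn's K-means; the data shapes and values are synthetic stand-ins for the Swedish smart grid data:

```python
# Cluster daily clearness-index profiles; a separate forecasting model would
# then be trained per cluster label.
import numpy as np
from sklearn.cluster import KMeans

def clearness_index(measured, clear_sky, eps=1e-9):
    """Ratio of measured irradiance to modeled clear-sky irradiance."""
    return measured / (clear_sky + eps)

rng = np.random.default_rng(0)
clear_sky = rng.uniform(200, 900, size=(365, 24))      # W/m^2, toy values
measured = clear_sky * rng.uniform(0.2, 1.0, size=(365, 24))
kt = clearness_index(measured, clear_sky)              # daily 24-h profiles

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(kt)
print(np.bincount(labels))                             # cluster sizes
```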
For efficient energy exchanges in smart energy grids under the presence of renewables, predictions of energy production and consumption are required. For robust energy scheduling, prediction of uncertainty bounds of Photovoltaic (PV) power production and consumption is essential. In this paper, we apply several Machine Learning (ML) models that can predict the power generation of PV and the consumption of households in a smart energy grid, while also assessing the uncertainty of their predictions by providing quantile values as uncertainty bounds. We evaluate our algorithms on a dataset from Swedish households having PV installations and battery storage. Our findings reveal that a Mean Absolute Error (MAE) of 16.12W for power production and 16.34W for consumption for a residential installation can be achieved with uncertainty bounds having quantile loss values below 5W. Furthermore, we show that the accuracy of the ML models can be affected by the characteristics of the household being studied. Different households may have different data distributions, which can cause prediction models to perform poorly when applied to households they were not trained on. However, our study found that models built directly for individual homes, even when trained on smaller datasets, offer the best outcomes. This suggests that the development of personalized ML models may be a promising avenue for improving the accuracy of predictions in the future.
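The uncertainty bounds above are typically trained and scored with the quantile (pinball) loss: for quantile level $q \in (0,1)$, observation $y$ and prediction $\hat{y}$,

$$L_q(y,\hat{y}) = \begin{cases} q\,(y-\hat{y}), & y \ge \hat{y},\\ (1-q)\,(\hat{y}-y), & y < \hat{y},\end{cases}$$

which penalizes under- and over-prediction asymmetrically so that the minimizer is the $q$-th conditional quantile.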
The purpose of the study was to find out how teachers at a school in Kenya conducted mathematics education in standard 4-6. The focus was on how the teachers worked with children in need of special support in mathematics. To fulfil this purpose, a case study was made with two interviews and seven observations of four teachers at a rural school in Kenya. The research questions were: What standards were there in the classrooms during mathematics lessons in Kenya? How did the teachers express their view of children in need of special support? The main conclusion was that the mathematics teaching of different teachers looked the same. The teacher stood in front of the blackboard, where the teaching occurred, and the pupils sat lined up at their benches. The pupils' learning was about repetition and imitating the teacher, mostly filling in the gaps that the teacher left for them to say. Pupils in need of special support got help through extra lessons and homework, but the teachers thought that it was hard to help every child when there was only one teacher in the classroom.
This thesis addresses the area of code anonymization in software development, with a focus on protecting sensitive source code in an increasingly digitized and AI-integrated world. The main problems that the thesis addresses are the technical and security challenges that arise when source code needs to be protected, while being accessible to AI-based analysis tools such as ChatGPT. This thesis presents the development of an application whose goal is to anonymize source code, in order to protect sensitive information while enabling safe interaction with AI. To solve these challenges, the Roslyn API has been used in combination with customized identification algorithms to analyze and process C# source code, ensuring a balance between anonymization and preservation of the code's functionality. The Roslyn API is part of Microsoft's .NET compiler platform that provides rich code analysis and transformation capabilities, enabling the transformation of C# source code into a detailed syntax tree for code structure inspection and manipulation. The results of the project show that the developed application successfully anonymizes variable, class, and method names while maintaining the logical structure of the source code. Its integration with ChatGPT enhances the user experience by providing interactive dialogues for analysis and assistance, making it a valuable resource for developers. Future work includes extending the application to support more programming languages and developing customized configurations to further improve ease of use and efficiency.
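As an analogous sketch (in Python with its ast module, not the Roslyn API used in the thesis), identifier renaming over a syntax tree can look like this; the id_N aliasing scheme is an assumption for illustration:

```python
# Rename function, parameter, and variable names in a syntax tree while
# preserving the program's structure.
import ast  # ast.unparse requires Python 3.9+

class Anonymizer(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}                      # original name -> alias

    def _alias(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"id_{len(self.mapping)}"
        return self.mapping[name]

    def visit_FunctionDef(self, node):
        node.name = self._alias(node.name)     # anonymize function names
        self.generic_visit(node)
        return node

    def visit_arg(self, node):
        node.arg = self._alias(node.arg)       # anonymize parameter names
        return node

    def visit_Name(self, node):
        node.id = self._alias(node.id)         # anonymize variable uses
        return node

src = "def total(price, qty):\n    return price * qty\n"
tree = Anonymizer().visit(ast.parse(src))
print(ast.unparse(tree))   # prints: def id_0(id_1, id_2): return id_1 * id_2
```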
We determine the exceptional sets of hypergeometric functions corresponding to the (2, 4, 6) triangle group by relating them to values of certain quaternionic modular forms at CM points. We prove a result on the number fields generated by exceptional values, and by using modular polynomials we explicitly compute some examples.
Recent developments in information technology, such as the Internet of Things and the cloud computing paradigm, enable public and private organisations to collect large amounts of data and employ various data analytic techniques for extracting important information that helps improve their businesses. Unfortunately, these benefits come at a high cost in terms of privacy exposure, given the high sensitivity of the data that are usually processed at powerful third-party servers. Given the ever-increasing number of data breaches, the serious damage they cause, and the need for compliance with the European General Data Protection Regulation (GDPR), these organisations look for secure and privacy-preserving data handling practices. During the workshop, we aimed at presenting an approach to the problem of user data protection and control, currently being developed in the scope of the PoSeID-on and PAPAYA H2020 European projects.
A recently proposed defense for the anonymity network Tor uses preload lists of domains to determine what should be cached in the Domain Name System (DNS) caches of Tor relays. The defense protects against attacks that infer what is cached in Tor relays. By having domains continuously cached (preloaded), the cache will become independent of which websites have been visited. The current preload lists contain useless domains and have room for improvement. The objective of this project is to answer the question of "How can we generate better preload lists?" and to provide improved methods for generating preload lists, with the ultimate goal of generating better preload lists that the Tor Project can benefit from.
We further developed existing tools to use web crawling to find more useful domains, and implemented filtering to remove useless domains from the preload lists. The project showed promising results: the number of useless domains decreased by an average of around 57%, and more useful domains were found.
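A minimal sketch of the filtering idea; the resolvability heuristic is an assumption for illustration, not the project's actual filter:

```python
# Keep only domains that actually resolve; unresolvable entries are useless
# to preload since no page visit could ever populate them in the cache.
import socket

def is_useful(domain: str) -> bool:
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

def filter_preload_list(domains):
    return [d for d in domains if is_useful(d)]

print(filter_preload_list(["example.com", "no-such-host.invalid"]))
```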
Reproducibility is one of the key characteristics of good science, but hard to achieve for experimental disciplines like Internet measurements and networked systems. This guide provides advice to researchers, particularly those new to the field, on designing experiments so that their work is more likely to be reproducible and to serve as a foundation for follow-on work by others.
Industry 4.0 and a global trend in digital transformation have brought new ideas and emerging technologies to the surface. Data has become a key asset for businesses, and streamlining data and automating data life cycles have become increasingly important. This industrial revolution is centered around cyber-physical systems, and it sets forth that new technologies will change how a business traditionally operates. However, the problem is a lack of tools, systems, and methods to realize this revolution. Thus, there is a strong demand for finding solutions that move businesses toward Industry 4.0. A new technology known as Digital Twin (DT) has emerged from this. This technology aims to improve the business value of big data by digitally representing physical entities. To operate successfully with this technology, other enabling technologies and tools are needed, providing DTs with high-quality data that accurately represent the system in which the twin models are used. This can be a problem as the data might originate from different sources and often do not follow the same format and standards. Furthermore, data must also be readily collected in a timely manner. To deal with problems such as these, a new term known as Data Operations (DataOps) has surfaced. DataOps is a set of practices and processes that aims to improve the communication, integration, and automation of data flow within data landscapes and organizations. This thesis introduces a methodology to investigate whether a standardized data logging tool can be used as a DataOps solution to collect, process, and make data available for DTs. This is done by investigating the current literature and applying testing methodologies to the tool. More specifically, a combination of load, performance, and stress tests are performed to assess the ability of the tool to collect large amounts of data. The focus is on investigating whether this can be done in a timely manner. It is concluded that the tool does possess features that are of importance for DataOps and DTs, and that it could be a viable option for data gathering for certain DTs on its own. However, as a result of the internal mechanics of the tool, it is not timely enough for use as a DataOps solution in general. Further research regarding improvements of its timeliness, other similar tools, and testing in a real environment consisting of a real DT is proposed and motivated.
We prove that there exists a martingale $f \in H_p$ such that the subsequence $\{L_{2^n}f\}$ of Nörlund logarithmic means with respect to the Walsh system is not bounded from the martingale Hardy space $H_p$ to the space $\mathrm{weak}\text{-}L_p$ for $0 < p < 1$. We also prove that for any $f \in L_p$, $p \ge 1$, $L_{2^n}f$ converges to $f$ at any Lebesgue point $x$. Moreover, some new related inequalities are derived.
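For reference, the Nörlund logarithmic means in question are, in the standard notation of this literature (restated here as an assumption), defined via the partial sums $S_k f$ of the Walsh-Fourier series:

$$L_n f := \frac{1}{l_n}\sum_{k=1}^{n-1}\frac{S_k f}{n-k}, \qquad l_n := \sum_{k=1}^{n-1}\frac{1}{k}.$$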
In this paper, we introduce some new weighted maximal operators of the partial sums of the Walsh–Fourier series. We prove that for some “optimal” weights these new operators indeed are bounded from the martingale Hardy space $H_p(G)$ to the Lebesgue space $\mathrm{weak}\text{-}L_p(G)$, for $0 < p < 1$. Moreover, we also prove the sharpness of this result. As a consequence we obtain some new and well-known results.
We investigate the subsequence $\{t_{2^n}f\}$ of Nörlund means with respect to the Walsh system generated by nonincreasing and convex sequences. In particular, we prove that a large class of such summability methods are not bounded from the martingale Hardy spaces $H_p$ to the space $\mathrm{weak}\text{-}L_p$ for $0 < p < 1/(1+\alpha)$, where $0 < \alpha < 1$. Moreover, some new related inequalities are derived. As applications, some well-known and new results are pointed out for well-known summability methods, especially for Nörlund logarithmic means and Cesàro means.
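In the usual notation (an assumption here), the Nörlund means investigated are generated by a sequence $\{q_k : k \ge 0\}$:

$$t_n f := \frac{1}{Q_n}\sum_{k=1}^{n} q_{n-k} S_k f, \qquad Q_n := \sum_{k=0}^{n-1} q_k,$$

so the nonincreasing and convex choices of $\{q_k\}$ determine the class of summability methods treated above.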
In this paper, we derive the maximal subspace of natural numbers $\{n_k : k \ge 0\}$ such that the restricted maximal operator, defined by $\sup_{k\in\mathbb{N}}|\sigma_{n_k}F|$ on this subspace of Fejér means of Walsh–Fourier series, is bounded from the martingale Hardy space $H_{1/2}$ to the Lebesgue space $L_{1/2}$. The sharpness of this result is also proved.
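Here $\sigma_n$ denotes the Fejér means of the Walsh–Fourier series, with the convention usual in this literature (an assumption on notation):

$$\sigma_n F := \frac{1}{n}\sum_{k=1}^{n} S_k F.$$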
We prove and discuss some new weak type (1,1) inequalities for maximal operators of Vilenkin-Nörlund means generated by monotone coefficients. Moreover, we use these results to prove a.e. convergence of such Vilenkin-Nörlund means. As applications, both some well-known and new inequalities are pointed out.
In this paper we introduce some new weighted maximal operators of the partial sums of the Walsh-Fourier series. We prove that for some "optimal" weights these new operators indeed are bounded from the martingale Hardy space $H_p(G)$ to the Lebesgue space $L_p(G)$, for $0 < p < 1$. Moreover, we also prove the sharpness of this result. As a consequence we obtain some new and well-known results.
A central component of managing risks in cloud computing is to understand the nature of security threats. The relevance of security concerns is evidenced by the efforts of both the academic community and technological organizations such as NIST, ENISA and CSA to investigate security threats and vulnerabilities related to cloud systems. Provisioning secure virtual networks (SVNs) in a multi-tenant environment is fundamental to ensuring trust in public cloud systems and to encouraging their adoption. However, comparing existing SVN-oriented solutions is a difficult task due to the lack of studies summarizing the main concerns of network virtualization and providing a comprehensive list of threats that those solutions should cover. To address this issue, this paper presents a threat classification for cloud networking, describing threat categories and attack scenarios that should be taken into account when designing, comparing, or categorizing solutions. The classification is based on the CSA threat report, building upon studies and surveys from the specialized literature to extend the CSA list of threats and to allow a more detailed analysis of cloud network virtualization issues.