
Abstract

There have been several shifts in cloud computing during the last decade. The next generation of cloud computing builds on the strengths of the current model while expanding its applicability. The evolving cloud infrastructure and new computing architectures will have far-reaching effects and will be critical in facilitating the Internet-of-Things paradigm by enhancing the connection between humans and IoT devices. The first purpose of this research is to review and discuss next generation computing architectures such as Fog, Mobile Edge, Serverless, and Software-Defined Computing. Organizations have turned to cloud adoption as a way to increase the scalability of their Internet-based database capabilities with little outlay of resources. Cloud adoption is a deliberate decision made by businesses to reduce costs, mitigate risk, and expand their database capabilities. Depending on the extent of adoption, an organization may exhibit varying degrees of cloud adoption. The second purpose of this research is to investigate the adoption strategies for next generation cloud computing. We applied two meta-learning algorithms, namely Ensemble Voting and Stacking classifiers. Our results show that most organizations with low levels of IT competence and high levels of perceived challenges are not planning to adopt next generation cloud computing in the near future. Most organizations with a moderate view of risk and IT competence are undecided about whether or not to adopt. Even organizations that have access to cutting-edge technology and a low level of concern about the potential challenges have mixed feelings about next generation cloud computing. Results from both classifier algorithms are almost comparable, validating the empirical findings.

Introduction

Traditional computing requires users to execute their applications on a server or computer that is physically near to them, such as in the same building. Cloud computing is based on the same concept, except that all actions take place in ‘the cloud’, i.e., the internet [1]. Historically, the term ‘cloud’ has been used as a metaphor for the Internet. Many businesses depend on this innovation to run their operations and use it as the foundation of their IT infrastructure.

This contemporary cloud has taken the internet by storm and permanently transformed the way organizations operate. Because no on-site data center is necessary when using cloud computing in the workplace, capital expenditures may be reduced to a minimum or eliminated altogether [2]. The amount of energy consumed to power and cool servers is reduced, which is environmentally beneficial. The money saved on capital expenditures and on physical data center setup and maintenance may be spent elsewhere, giving the company more time to concentrate on its business model.

Resources in the cloud may be readily saved, retrieved, recovered, or processed. Users may access their work on the move, on any device of their choosing, from anywhere in the world, as long as they are connected to the internet. Furthermore, all updates and modifications are performed automatically and off-site by service providers, saving money and time on system maintenance and significantly reducing IT staff burdens [3], [4].

There are many challenges in adopting next generation cloud computing. Common challenges concern IT capabilities and processes. Today’s businesses employ people of many ages and technical ability levels. Businesses should keep training staff in many departments on how to utilize the cloud and reduce everyday operational issues, in addition to security training [5], [6]. Moreover, businesses must become skilled at using numerous integrated services. To respond to demand and improve procedures, IT should be equipped with sufficient tools to migrate data to alternative service providers.

Edge computing is a connectivity principle that focuses on placing computing as near to the data source as feasible in order to decrease latency and bandwidth consumption [7]. In plain terms, edge computing means executing fewer operations in the cloud and relocating them to local locations, such as a user’s computer, an IoT device, or an edge server. By bringing processing to the network’s edge, the amount of long-distance transmission between a user and a server is reduced [8].

Low latency, an emphasis on storage, and real-time insights are the advantages of fog computing, which is related to edge computing. Edge computing is specifically focused on conducting compute tasks close to end-users, away from the center of the network, while fog computing is the general architecture of dispersing resources throughout the network [9], [10].

Multi-Access Edge Computing (MEC) shifts traffic and service computing from a centralized cloud to the network’s edge, closer to the client [11]. The network edge examines, integrates, and stores data rather than sending it all to the cloud for processing. Gathering and processing information closer to the user minimizes latency and gives high-bandwidth services real-time performance [12].

In comparison to typical cloud-based or server-centric architectures, serverless computing has many benefits [13]. Serverless architectures provide higher scalability, increased flexibility, and faster time to deploy for many developers, all at a lower cost. Developers do not have to worry about procuring, provisioning, or maintaining backend servers with serverless frameworks [14], [15].

Software-defined architectures have recently become popular in the world of digital technologies. They offer a new way of looking at both hardware and software, made possible in large part by low-cost, high-performance hardware and virtualization technologies [16], [17]. This sector has given rise to technologies such as software-defined networking, software-defined storage, software-defined computing, and software-defined data centers [18].

When adopting software-defined compute, computational functions may be distributed among any number of processing units, and the workload can be distributed among them rather than being allocated to a single device [19], [20]. Furthermore, based on the available resources, the compute functions may be shifted to other parts of the virtual infrastructure without having to physically alter the hardware.

The Next Generation Computing Architecture

Fog and Mobile Edge Computing 

There are many devices and sensors, ranging from simple consumer gadgets to more complicated systems. By the next decade, the number of Internet-of-Things (IoT) endpoints is expected to reach around 40 billion. These linked devices, which range from individual gadgets to more complicated systems such as cars and power grids, have sensing, actuation, transmission, processing, and storage capacity and create massive volumes of data of different forms [21], [22]. However, it is difficult to run a large number of diverse IoT devices while remaining performance-efficient in real time. Data produced by IoT devices is often transported to and managed centrally by services hosted on clouds located in different geographical areas. Given the connection delay and input bandwidth needs, this is unsustainable [23], [24].

Figure 1. MEC architecture

A new and revolutionary approach is taking form, led by researchers and industry professionals, to enable applications to use resources situated at the network’s edges and along the spectrum between the clouds and the edges. These edge resources may be closer to Internet-of-Things devices, such as home networks, gateways, or larger micro data centers, either spatially or in the network architecture. Edge resources could be utilized to offload selected cloud services to speed up a program or to run edge-native apps.

The paradigm used to harness the edge is known as “Fog/Edge computing” [25]. Many think of fog and MEC as extensions of the cloud, with resources located at the network’s edge and hence near to the end customer. In recent years, the terms Fog and MEC have sparked increased attention in academia and industry because of the significant role they may play in addressing several situations where the standard cloud fails, particularly in terms of service quality.

MEC consists of cloud services that operate at the network’s edge and execute, in real time or near-real time, certain operations that would normally be performed in central core or cloud architectures. MEC brings computing capacity closer to the end user, enabling applications and services that need specific connection features such as ultra-low latency. It enables the acceleration of information, services, and applications by boosting their responsiveness.

The Fog/Edge computing model is projected to increase service deployment speed, enable the use of opportunistic and low-cost computing, and capitalize on network latency and bandwidth diversity across these resources. When employing edge resources, several issues occur, necessitating a rethinking of operating systems, virtualization and containers, and fabric management middleware solutions. To enable developers to construct unique applications that may profit from large-scale, distributed, data-driven edge systems, new abstractions and enhancements to conventional programming and storage paradigms are required. Addressing edge resource security, privacy, and trust is critical for administering the assets and context of mobile, transitory, and hardware-constrained resources. Edge and 5G integration will also offer new possibilities and problems [12]. For many applications, enabling machine learning at the edge is crucial. Finally, emergent fields such as driverless cars and intelligent health need fog and edge technologies [26].

Many sectors find the following advantages in these technologies: 1. They are critical facilitators of IoT infrastructures, providing the processing and storage capacity that are often limited in IoT devices. 2. They enable cloud services to be nearer to data sources, reducing latency and increasing performance. 3. They improve safety and confidentiality by locating security checks near sources of data or even on company premises. 4. Because Fog and MEC are dispersed, businesses may deploy specialized and context-aware services while retaining high levels of extensibility, compatibility, and effective (de-)allocation of assets.

Serverless computing 

The term “serverless computing” refers to an execution model for the cloud in which a cloud vendor dynamically allocates the computing resources and storage required to run a given piece of code, and then bills the customer for those resources. There are still servers involved, but their deployment and administration are handled completely by the provider.

It is critical to grasp that, despite the name, servers still execute the code in serverless computing. The term “serverless” refers to the notion that infrastructure provisioning and administration duties are hidden from the developer. This approach allows developers to concentrate more on business logic and deliver more value to the core of the organization. Serverless computing enables teams to be more productive and bring innovations to market more quickly, while also allowing enterprises to better utilize resources and concentrate on innovation [13].

Figure 2. Serverless as compared to traditional

Developers never pay for unused capacity while using serverless. When the code runs, the cloud vendor spins up and supplies the necessary computing resources on demand and then spins them back down when execution ends, a process known as ‘scaling to zero’. Billing begins when execution begins and ends when execution concludes; pricing is often based on execution time and resources used [27].
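As a concrete illustration, the following is a minimal sketch of a function-as-a-service handler in Python, assuming an AWS Lambda-style runtime; the event shape and response format are illustrative assumptions, not details from this article:

```python
import json

def handler(event, context):
    # The platform provisions compute only while this function runs:
    # billing starts at invocation and stops once the value is returned,
    # after which the instance may be scaled back down to zero.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```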

There are many benefits of serverless computing. First, there is no need for server administration. Although serverless computing does occur on servers, developers never need to interact with them; the vendor is in charge of them. This may minimize the amount of money spent on DevOps, allowing developers to build and enhance their apps without being limited by server capacity [28].

Secondly, developers are only billed for the capacity they use, which saves money. Developers are only billed for their usage, similar to a ‘pay-as-you-go’ phone plan. The code is only executed when the serverless application requires backend functionality, and it dynamically scales up as required. Provisioning is real-time, accurate, and dynamic. Some systems are so precise that they charge in 100-millisecond increments. In contrast, in a traditional server-full design, developers must forecast how much server capacity they will require and then acquire that capacity, whether or not they use it.
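To make the pay-per-use model concrete, here is a back-of-the-envelope cost sketch; the rates are invented placeholders, not any specific provider’s pricing:

```python
# Assumed illustrative rates (placeholders, not real pricing).
PRICE_PER_GB_SECOND = 0.0000167
PRICE_PER_REQUEST = 0.0000002

def monthly_cost(invocations: int, duration_ms: float, memory_mb: int) -> float:
    """Estimate one month's bill for a function billed per request and
    per GB-second of execution time."""
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# e.g. 3 million requests/month at 120 ms each with 512 MB of memory:
print(f"${monthly_cost(3_000_000, 120, 512):.2f}")  # -> $3.61
```

Idle time costs nothing under this model, which is precisely why it can beat a provisioned server for spiky workloads.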

Thirdly, scalability is inherent in serverless designs. Applications created with a serverless architecture will automatically scale as the user base or consumption expands. If a function must execute in several instances, the provider’s servers will launch, run, and stop them as required, often utilizing containers. (If the function was recently performed, it will start up faster.) As a consequence, a serverless application may manage an exceptionally large volume of requests just as effectively as a single request from a single user [29]. A traditionally constructed program with a fixed amount of server capacity might be overwhelmed by an unexpected rise in traffic.

Moreover, it is feasible to deploy and upgrade quickly. It is not necessary to upload code to servers or perform any backend setup when using a serverless architecture to release a working version of an application. Developers can rapidly upload code and launch a new product. Because the program is not a monolithic structure but rather a set of functions provided through the vendor, they may upload code all at once or one function at a time. This also allows for the rapid update, patching, correction, or addition of new functionality to a program. Changes to the whole program are not required; instead, developers may upgrade the application one function at a time.

Finally, code may be executed near the end user, reducing latency. Because the program is not hosted on a central server, its code may be executed from any location. As a result, depending on the provider, application functions may be executed on servers close to the end customer. This decreases latency since user requests are no longer routed all the way to a central server.

With so much to value in serverless, businesses are using it in a broad range of applications. However, there are several limitations, some of which are particular to specific applications and others that are universal. One is undesirable latency for some applications: since serverless architectures forgo long-running processes in favor of scaling up and back down to zero, they must sometimes start from scratch to service a new request. For many applications, the resulting delay might be insignificant or even undetectable to consumers. However, for certain applications, such as financial services, this cold-start latency may be intolerable.

Serverless delivers considerable cost reductions for spiky workloads since it scales up and down on demand in accordance with the workload. However, it may not provide similar savings for workloads with predictable, stable, or lengthy processes; in these circumstances, a typical server setup may be simpler and more cost-effective.

Tracking and troubleshooting are difficult chores in most distributed architectures, and serverless design (or a microservices architecture, or a mix of the two) only adds to the complexity. For example, using current tools or methods, teams might find it challenging or impossible to inspect or troubleshoot serverless operations [30]. One of the most significant benefits of serverless is that the cloud vendor maintains all computing resources. While this gives developers more time to concentrate on building and enhancing code, it also implies that moving code to a different cloud provider may be difficult. Many cloud providers’ serverless platforms are still not portable in the way virtual machines (VMs) and Docker containers are, and they are designed to offer an environment of managed cloud services [31]. To get the same outcomes on another provider’s platform, application code that invokes various services supplied by one cloud provider’s serverless platform may need to be partly or totally changed.

Software defined computing 

Software-defined computing (SDC) is described as a unified abstraction layer that presents computing architecture as collections of virtual and physical resources that users may dynamically compose into services [16], [18]. It is now achievable due to new datacenter solutions that aim to reduce manual installation and setup. By reaping the advantages of datacenter abstraction, organizations can react to business surges more quickly.

Figure 3. Software defined computing

One firm’s mobile app, for example, can generate several times the planned transaction volume. Only software-defined computing makes it possible to install and update new apps at the speed required to cope with these scenarios. SDN concepts are used in software-defined computing, in which virtual network layers are constructed over physical resources and multiple network planes are isolated [32]. The SDN controller, which communicates across the multiple levels of the networking planes and is also utilized in software-defined computing to link the virtual to the physical, is a key component of SDN architecture [33]. Management of computing resources may be transferred to a centralized interface using software-defined compute.

According to the cloud computing literature, there are several value-driven advantages of SDC. The most significant advantage of software-defining this part of the datacenter is flexibility, and SDC clearly demonstrates it. A standard bare-metal server represents a dedicated resource setting: physical, built-in, fixed levels of memory, CPU, and disk storage are assigned to the server. This paradigm, which has served data centers for many years, has significant flaws.

Organizations may consolidate existing server fleets and obtain cost savings by pooling computational resources such as memory and CPU. Today’s host servers can easily support high consolidation ratios, and companies with even higher consolidation percentages are not unusual. Less exposed metal means less rack space, energy consumption, and cooling requirements.

It entails automation software transparently migrating VMs to prevent an expected outage, in a completely seamless operation with almost no effect on users. It also involves failing over VMs after a host server breakdown, with all of the VMs relocated and operational without disruption before the network supervisor is even notified of the failure [34]. It entails regularly assessing performance and traffic loads and implementing the actions required to achieve optimal load balancing across hosts and allotted resource pools. It is about automatically providing more resources as required and retiring VMs when the network is idle to save electricity. It entails letting the software handle the day-to-day operations so the IT personnel can focus on value-added ideas and initiatives [35].

Network administrators may expedite many management operations by using software-defined computing and virtualization to streamline the data center and make it more readily scalable. Because hardware components are often generic and industry-standard, they may be quickly added to meet demand. This enables network administrators to construct a dynamic, scalable resource pool managed by a software-defined cloud service.

Methods

Based on the preceding section’s discussion, this study splits adoption drivers into two groups: a) technical compatibility and b) perceived risks. To investigate the trend of next generation cloud computing adoption in businesses, we used two machine learning classifier methods. The information was compiled from 148 IT specialists at various institutions.

Ensemble Voting Classifier

The Ensemble Voting Classifier algorithm is a meta-classifier that combines similar or conceptually different machine learning models for classification via majority or plurality voting.

Both “hard” and “soft” voting are supported by the Ensemble Voting Classifier. In hard voting, we predict the final class label as the label most frequently predicted by the individual classifiers. In soft voting, we predict the class labels by combining the class probabilities [36]–[38].

Figure 4. Meta classifiers (Ensemble and Stacking)

The most basic kind of majority voting is called “hard voting.” Here, we predict the class label $\hat{y}$ as the mode of the labels predicted by the individual classifiers $A_1, \ldots, A_m$:

$\hat{y} = \mathrm{mode}\{A_1(x), A_2(x), \ldots, A_m(x)\}$

Let us assume we combine three classifiers that classify a training sample as follows:

  • classifier 1 -> class 0
  • classifier 2 -> class 0
  • classifier 3 -> class 1

$\hat{y} = \mathrm{mode}\{0, 0, 1\} = 0$

By majority vote, we would label this sample “class 0.”
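This hard vote can be checked in a couple of lines of Python (a minimal sketch; `statistics.mode` simply returns the most frequent label):

```python
from statistics import mode

predictions = [0, 0, 1]   # labels from classifiers 1-3
print(mode(predictions))  # -> 0, the hard-vote winner
```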

In addition to the simple majority vote above, a weighted majority vote can be computed by assigning a weight $w_j$ to classifier $A_j$:

$\hat{y} = \arg\max_i \sum_{j=1}^{m} w_j \, \chi_L\!\left(A_j(x) = i\right),$

where $L$ is the set of distinct class labels and $\chi_L$ is the characteristic function $[A_j(x) = i \in L]$.

Continuing with the example above,

  • classifier 1 -> class 0
  • classifier 2 -> class 0
  • classifier 3 -> class 1

with weights $\{0.2, 0.2, 0.6\}$, we get $\hat{y} = 1$:

$\hat{y} = \arg\max_i \left[\,0.2 \cdot \chi(i = 0) + 0.2 \cdot \chi(i = 0) + 0.6 \cdot \chi(i = 1)\,\right] = 1$
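The same weighted vote can be reproduced numerically. A minimal sketch using NumPy, where `np.bincount` sums the weight mass behind each candidate label:

```python
import numpy as np

predictions = np.array([0, 0, 1])    # labels from classifiers 1-3
weights = np.array([0.2, 0.2, 0.6])  # classifier weights w_j

# Sum the weights falling on each label, then take the argmax.
scores = np.bincount(predictions, weights=weights)  # -> [0.4, 0.6]
print(scores.argmax())                              # -> 1
```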

Soft voting makes predictions about the class labels using the classifiers’ predicted probabilities $p$; this tactic is recommended only if the classifiers are well calibrated.

$\hat{y} = \arg\max_i \sum_{j=1}^{m} w_j \, p_{ij},$

where $w_j$ is the weight assigned to the $j$-th classifier and $p_{ij}$ is its predicted probability for class $i$.
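A numerical sketch of soft voting follows; the probability rows are invented purely for illustration:

```python
import numpy as np

# Rows: each classifier's predicted probabilities for classes 0 and 1.
probabilities = np.array([[0.9, 0.1],
                          [0.8, 0.2],
                          [0.4, 0.6]])
weights = np.array([0.2, 0.2, 0.6])

# Weighted average of the class probabilities, then argmax over classes.
avg = np.average(probabilities, axis=0, weights=weights)  # -> [0.58, 0.42]
print(avg.argmax())                                       # -> class 0
```

Note that with these invented probabilities the soft vote picks class 0, even though the weighted hard vote above picked class 1; the two schemes can disagree when a heavily weighted classifier is only mildly confident.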

Stacking Classifier

The final estimate produced by a stacking classifier may be as good as, or even better than, the best estimator in the base layer, since it is based on the outputs of the individual estimators in the stack. Specifically, it employs a meta-learning strategy to determine the most effective means of combining predictions obtained from several base machine learning approaches [39]. When applied to a classification task, the stacking classifier method has the potential to combine the strengths of many high-performing methods to produce predictions that exceed those of any one algorithm in the ensemble. Each classifier technique in the ensemble is trained on the whole dataset, and the meta-classifier is then fitted using the individual models’ outputs (meta-features) [40]. To train the meta-classifier, one may use either the predicted class labels or the ensemble probabilities.

We classify the dependent variable into three categories: no intention to adopt next generation cloud computing = 0, undecided = 1, and intend to adopt = 2. A sketch of both meta-classifiers on this setup follows.
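Below is a minimal sketch of both meta-classifiers on this three-class problem, assuming scikit-learn is available; the two predictors, the synthetic data, and the base estimators are illustrative placeholders, not the study’s actual survey data:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data: two predictors (technical compatibility, perceived
# challenges) for 148 respondents, with labels 0/1/2 as defined above.
rng = np.random.default_rng(0)
X = rng.normal(size=(148, 2))
y = rng.integers(0, 3, size=148)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(max_depth=3)),
    ("svm", SVC(probability=True)),  # probabilities needed for soft voting
]

voting = VotingClassifier(estimators=base, voting="soft")
stacking = StackingClassifier(
    estimators=base, final_estimator=LogisticRegression(max_iter=1000)
)

for name, model in (("voting", voting), ("stacking", stacking)):
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```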

Results

Figure 5. Ensemble learning results

The decision boundaries for the ensemble voting algorithm are illustrated in Figure 5. It indicates that there are three properly categorized regions based on the intention to utilize next generation cloud computing. The IT workers who presently have no plan to implement next generation cloud computing dwell in the grey zone on the right of Figure 5. The IT experts who are still unsure about using next generation cloud computing are largely in the dark middle zone. Those who aim to utilize next generation cloud computing are in the left zone. The y-axis reflects the perceived challenges connected with the use of next generation cloud computing, and the x-axis represents the IT competence of the organizations. It can be noted that the majority of institutions with limited technical compatibility and significant perceived challenges presently have no desire to implement next generation cloud computing. The organizations with milder risk perceptions and moderate technical compatibility are usually undecided about whether to incorporate next generation cloud computing.

Figure 6. Stacking classifier results

The facilities with high technical compatibility and low perceived risk are either uncertain or willing to embrace next generation cloud computing approaches. The stacking classifier yielded virtually comparable findings, confirming the validity of our empirical results. Nevertheless, according to Tables 5 and 6, the Ensemble Voting classifier has a greater accuracy (0.80) than the stacking classifier, with an accuracy of 0.74.

Conclusion

Cloud computing transformed the information technology (IT) sector by enabling dynamic and virtually unlimited growth, on-demand resources, and utility-based consumption. Recent changes in user traffic and needs, however, have exposed the inadequacies of cloud computing, specifically its inability to give real-time responses and manage enormous surges in data volumes.

Today’s corporate enterprises recognize that no single cloud or infrastructure solution can fulfill the demands of all of their apps and use cases. As the limitations of adopting serverless are resolved and the adoption of edge computing develops, we should anticipate serverless architecture becoming more common. Serverless computing is evolving as serverless providers develop answers to some of its disadvantages. There are times when it makes more sense, both financially and in terms of system design, to employ dedicated servers, whether self-managed or supplied as a service. Larger applications with a reasonably steady, known workload, for example, may warrant a conventional arrangement, which is likely to be less costly in such instances.

Furthermore, migrating old programs to a newer platform with an altogether different design may be extremely complex. Therefore, the overwhelming majority of adopters are startups looking for a way to expand smoothly and decrease the entry barrier. Serverless is also an excellent option for applications which do not operate constantly but instead experience quiet intervals and traffic surges.

One of the primary benefits of software-defined computing is that it enables infrastructure to become more adaptable, allowing it to serve several clients or users at the same time. The infrastructure may be divided into resources which can be supplied on demand using virtualization technologies. This has been a defining feature of the ongoing cloud computing revolution.

Multi-access Edge Computing (MEC) provides cloud computing services and an IT servicescape at the network’s edge to developers and content producers. This environment is distinguished by low latency as well as high bandwidth, along with real-time accessibility to radio network data that programs may use.

IoT devices and today’s 4G and 5G networks are driving forces for edge computing. Because of the exponential rise in traffic, particularly video, and the expansion of connected devices, network infrastructure will be required to scale well in order to transmit increasing amounts of data. To address these needs, MEC delivers the mobility and adaptability of the cloud nearer to the client.

While there are many advantages to moving to the cloud, including cost savings and improved workflow, it is important for businesses to carefully evaluate their service providers, security risks, and continuing procedural hurdles to ensure they are providing a safe and secure environment for their employees and clients.

References

[1] H. F. Cervone, “An overview of virtual and cloud computing,” OCLC Systems & Services: International digital library perspectives, vol. 26, no. 3, pp. 162–165, Jan. 2010.

[2] Marston, Li, Bandyopadhyay, and Zhang, “Cloud computing—The business perspective,” Decis. Support Syst., 2011.

[3] H. Saini, A. Upadhyaya, and M. K. Khandelwal, “Benefits of Cloud Computing for Business Enterprises: A Review,” in Computing & …, 03-Oct-2019.

[4] F. Etro, “The economic impact of cloud computing on business creation, employment and output in Europe,” Review of Business and Economics, 2009.

[5] W. Kim, “Cloud computing: Today and tomorrow,” J. Object Technol., 2009.

[6] D. Assante, M. Castro, I. Hamburg, and S. Martin, “The Use of Cloud Computing in SMEs,” Procedia Comput. Sci., vol. 83, pp. 1207–1212, Jan. 2016.

[7] M. Satyanarayanan, “The Emergence of Edge Computing,” Computer , vol. 50, no. 1, pp. 30–39, Jan. 2017.

[8] W. Shi and S. Dustdar, “The Promise of Edge Computing,” Computer , vol. 49, no. 5, pp. 78–81, May 2016.

[9] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, Oct. 2016.

[10] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A Survey on Mobile Edge Computing: The Communication Perspective,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, Fourthquarter 2017.

[11] M. Mehrabi, D. You, V. Latzko, H. Salah, M. Reisslein, and F. H. P. Fitzek, “Device- Enhanced MEC: Multi-Access Edge Computing (MEC) Aided by End Device Computation and Caching: A Survey,” IEEE Access, vol. 7, pp. 166079–166108, 2019.

[12] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella, “On Multi-Access Edge Computing: A Survey of the Emerging 5G Network Edge Cloud Architecture and Orchestration,” IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1657–1681, thirdquarter 2017.

[13] I. Baldini, P. Castro, K. Chang, P. Cheng, and S. Fink, “Serverless computing: Current trends and open problems,” in cloud computing, 2017.

[14] E. Jonas, J. Schleier-Smith, V. Sreekanti, and C. C. Tsai, “Cloud programming simplified: A berkeley view on serverless computing,” arXiv preprint arXiv, 2019.

[15] P. Castro, V. Ishakian, V. Muthusamy, and A. Slominski, “The rise of serverless computing,” Commun. ACM, vol. 62, no. 12, pp. 44–54, Nov. 2019.

[16] R. Buyya, R. N. Calheiros, J. Son, A. V. Dastjerdi, and Y. Yoon, “Software-Defined Cloud Computing: Architectural elements and open challenges,” in 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2014, pp. 1–12.

[17] R. Jain and S. Paul, “Network virtualization and software defined networking for cloud computing: a survey,” IEEE Commun. Mag., vol. 51, no. 11, pp. 24–31, Nov. 2013.

[18] H. Mei, “Understanding ‘software-defined’ from an OS perspective: technical challenges and research issues,” Sci. China Inf. Sci., vol. 60, no. 12, Dec. 2017.

[19] Q. Yan and F. R. Yu, “Distributed denial of service attacks in software-defined networking with cloud computing,” IEEE Commun. Mag., vol. 53, no. 4, pp. 52–59, Apr. 2015.

[20] A. C. Baktir, A. Ozgovde, and C. Ersoy, “How Can Edge Computing Benefit From Software-Defined Networking: A Survey, Use Cases, and Future Directions,” IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2359–2391, Fourthquarter 2017.

[21] X. Sun and N. Ansari, “EdgeIoT: Mobile Edge Computing for the Internet of Things,” IEEE Commun. Mag., vol. 54, no. 12, pp. 22–29, Dec. 2016.

[22] G. Premsankar, M. Di Francesco, and T. Taleb, “Edge Computing for the Internet of Things: A Case Study,” IEEE Internet of Things Journal, vol. 5, no. 2, pp. 1275–1284, Apr. 2018.

[23] Y. Ai, M. Peng, and K. Zhang, “Edge computing technologies for Internet of Things: a primer,” Digital Communications and Networks, vol. 4, no. 2, pp. 77–86, Apr. 2018.

[24] P. Corcoran and S. K. Datta, “Mobile-Edge Computing and the Internet of Things for Consumers: Extending cloud computing and services to the edge of the network,” IEEE Consumer Electronics Magazine, vol. 5, no. 4, pp. 73–74, Oct. 2016.

[25] C.-H. Hong and B. Varghese, “Resource Management in Fog/Edge Computing: A Survey on Architectures, Infrastructure, and Algorithms,” ACM Comput. Surv., vol. 52, no. 5, pp. 1–37, Sep. 2019.

[26] Y. Simmhan, “Big data and fog computing,” arXiv [cs.DC], 27-Dec-2017.

[27] P. Castro, V. Ishakian, V. Muthusamy, and A. Slominski, “The server is dead, long live the server: Rise of Serverless Computing, Overview of Current State and Future Trends in Research and Industry,” arXiv [cs.DC], 07-Jun-2019.

[28] H. Shafiei, A. Khonsari, and P. Mousavi, “Serverless computing: A survey of opportunities, challenges and applications,” arXiv [cs.NI], 04-Nov-2019.

[29] G. Adzic and R. Chatley, “Serverless computing: economic and architectural impact,” in Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, Paderborn, Germany, 2017, pp. 884–889.

[30] Z. Al-Ali et al., “Making Serverless Computing More Serverless,” in 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), 2018, pp. 456–459.

[31] W. Lloyd, S. Ramesh, S. Chinthalapati, L. Ly, and S. Pallickara, “Serverless Computing: An Investigation of Factors Influencing Microservice Performance,” in 2018 IEEE International Conference on Cloud Engineering (IC2E), 2018, pp. 159–169.

[32] M. Karakus and A. Durresi, “A survey: Control plane scalability issues and approaches in Software-Defined Networking (SDN),” Computer Networks, vol. 112, pp. 279–293, Jan. 2017.

[33] D. B. Rawat and S. R. Reddy, “Software Defined Networking Architecture, Security and Energy Efficiency: A Survey,” IEEE Communications Surveys & Tutorials, vol. 19, no. 1, pp. 325–346, Firstquarter 2017.

[34] K. Govindarajan, K. C. Meng, and H. Ong, “A literature review on Software-Defined Networking (SDN) research topics, challenges and solutions,” in 2013 Fifth International Conference on Advanced Computing (ICoAC), 2013, pp. 293–299.

[35] C. Dixon et al., “Software defined networking to support the software defined environment,” IBM J. Res. Dev., vol. 58, no. 2/3, p. 3:1-3:14, Mar. 2014.

[36] R. Atallah and A. Al-Mousa, “Heart disease detection using machine learning majority voting ensemble method,” conference on new trends in computing …, 2019.

[37] A. Bharadwaj, A. Srinivasan, A. Kasi, and B. Das, “Extending The Performance of Extractive Text Summarization By Ensemble Techniques,” in 2019 11th International Conference on Advanced Computing (ICoAC), 2019, pp. 282–288.

[38] Fauzi and Yuniarti, “Ensemble method for indonesian twitter hate speech detection,” Indones. j. electr. eng. comput. sci., 2018.

[39] S. Segrera, J. Pinho, and M. N. Moreno, “Information-Theoretic Measures for Meta-learning,” Hybrid Artificial Intelligence Systems, pp. 458–465, 2008.

[40] C. Giraud-Carrier, R. Vilalta, and P. Brazdil, “Introduction to the special issue on meta-learning,” Mach. Learn., vol. 54, no. 3, pp. 187–193, Mar. 2004.
