Software development in the cloud environment really needs to be rethought. The basic model of software has not changed since its emergence: applications are written, and then they run on a platform. But because infrastructure develops rapidly, the basic principles of application design and deployment do change from time to time, and sometimes those changes are dramatic.
For example, the appearance of the PC and the x86 architecture, and the birth of the client/server model in the 1980s, brought about tremendous changes in application design principles. Then, with the advent of the web and open source technology in the mid-1990s, everything changed again. Every time such a huge shift occurs, we are forced to rethink the way software is developed and deployed.
Now infrastructure capabilities have taken another leap, and the dominant force is Amazon Web Services (especially given that network speeds have improved dramatically).
Obviously, in order to take full advantage of the new cloud facilities, applications that succeed on AWS must be fundamentally different from applications running on enterprise servers, and even from applications running on virtual servers. Beyond that, other factors also dictate that the design of cloud applications must differ from the past.
Listed below are some of the key factors, which also show how the old world and the new world diverge:
In the old world, scaling was achieved vertically: to accommodate more users or data, you simply bought a bigger pair of servers.
In the new world, scalability is usually achieved through horizontal scaling. What you add is not a bigger machine but multiple machines of the same kind, and in the cloud world those machines are virtual machines.
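A minimal sketch of what horizontal scaling looks like in code: given an observed request rate and the capacity of one identical instance, compute how many instances to run. The function name, the 20% headroom factor, and the min/max bounds are all illustrative assumptions, not any provider's actual policy.

```python
import math

def desired_instances(current_rps, capacity_per_instance,
                      min_instances=2, max_instances=100):
    """Size a fleet of identical instances for the observed load.

    Hypothetical scale-out policy: provision enough instances to absorb
    the current request rate with ~20% headroom, clamped to a range so
    the fleet never shrinks below a floor or grows without bound.
    """
    needed = math.ceil(current_rps * 1.2 / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))
```

For example, at 10,000 requests/second with instances that each handle 500, this policy would ask for 24 instances; a real autoscaler would feed a metric like this into the cloud provider's scaling API.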
In the past, resilience was achieved at the hardware layer. Today the underlying infrastructure hardware is regarded as the weak link, so applications must adapt on their own. The application cannot assume that every virtual machine instance is working properly; it does not matter if a single virtual machine fails for a while, and the application must be prepared for that.
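One common way an application "prepares" for a failed instance is to retry transient failures with exponential backoff, on the assumption that the request will land on a healthy replacement. This is a generic sketch of that pattern, not any specific framework's API:

```python
import random
import time

def call_with_retries(op, attempts=5, base_delay=0.1, rng=None):
    """Retry a flaky operation with exponential backoff and jitter.

    `op` is any callable that may raise on a transient failure, for
    example a request sent to a VM that has just been terminated and
    replaced. The last failure is re-raised if all attempts fail.
    """
    rng = rng or random.Random()
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off exponentially, with jitter so many clients
            # retrying at once do not stampede the survivors.
            time.sleep(base_delay * (2 ** attempt) * (1 + rng.random()))
```

The jitter term matters in a horizontally scaled fleet: without it, thousands of clients that failed at the same moment would all retry at the same moment too.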
Take Netflix as an example. It is arguably the most advanced cloud user and has gone furthest down the road of cloud applications. Netflix runs a tool called Chaos Monkey that randomly kills virtual machine instances while applications are under load. The purpose? To ensure the application keeps running and stays resilient: by making applications face random instance loss in practice, the team is forced to build more resilient systems.
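The idea behind Chaos Monkey can be sketched in a few lines: pick a random running instance and kill it. The data shapes and function below are simplified stand-ins; the real tool terminates actual cloud instances through the provider's API rather than mutating local state.

```python
import random

def chaos_monkey(instances, rng=None):
    """Randomly pick one running instance and mark it terminated.

    `instances` is a hypothetical list of dicts with 'id' and 'state'
    keys. Returns the id of the terminated instance, or None if
    nothing was running.
    """
    rng = rng or random.Random()
    running = [i for i in instances if i["state"] == "running"]
    if not running:
        return None
    victim = rng.choice(running)
    victim["state"] = "terminated"  # stand-in for a terminate-instance API call
    return victim["id"]

# Simulated fleet of five identical VMs; one will be killed at random.
fleet = [{"id": f"vm-{n}", "state": "running"} for n in range(5)]
killed = chaos_monkey(fleet, rng=random.Random(42))
```

The point is not the killer itself but what it proves: if the application degrades gracefully every time this runs, its resilience is being tested continuously rather than assumed.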
In the old world, the load of applications such as finance and payroll was generally stable and predictable. The number of system users and the number of records to be processed at any particular moment were basically known.
In the new world, workloads are variable and unpredictable. Today's software systems must reach much further out, to consumers and devices whose demand arrives at unpredictable times and at unmeasurable volumes (think of the 12306 website that has become a target of public criticism). Adapting to these unforeseen fluctuations in application load requires a new architecture, and although we are now on the cloud, we are obviously still in the early stages.
In the past, software did not have much diversity. Each application was written in one language and used one database. Companies generally relied on one or a few operating systems. The software stack was boringly simple, at least by today's standards.
In the new world of the cloud, the situation is quite different. An application may use many different languages, libraries, toolkits, and database products. At the same time, because you can create and launch your own images in the cloud and customize them for specific needs, a company's applications must be able to run in a variety of different configurations.
From virtual machine to cloud
Even the comparatively recent hypervisor falls short of modern cloud thinking. The hypervisor, developed by virtualization pioneer VMware, essentially presents each virtual machine as a stand-in for a physical machine.
In the cloud, what is virtualized does not represent a physical server; it represents a unit of computing.
In the old world, users were taught to be patient. The system might take a long time to complete even simple retrieval or update requests, and new features also arrived very slowly.
In the new world, users are impatient. They can hardly tolerate delay and are unwilling to wait, and they want software updated frequently: if not every day, then at least every week. You can see the evidence in self-service IT, where instead of filing a ticket with the IT department and waiting days for a response, users provision the resources they need themselves.