
Engineering Software As A Service Armando Fox Pdf



Call for Participation [PDF]

The rapid and continuing growth of the software industry creates opportunities and challenges for software engineering education. For example, how can educators effectively meet the need for large numbers of software engineers? How can we tailor the training of software engineers to industrial needs at different levels? How can we introduce experience from industry into the curriculum? And how can software engineering principles be integrated with each country's own culture?








Title: Myths About MOOCs and Agile - Slides from the keynote

While the media's infatuation with MOOCs continues unabated, legislation around MOOCs is racing ahead of pedagogical practice, and a recent opinion piece expresses grave concerns about their role ("Will MOOCs Destroy Academia?", Moshe Vardi, CACM 55(11), Nov. 2012). In the first part of this talk, I will try to bust a few MOOC myths by presenting provocative, if anecdotal, evidence that appropriate use of MOOC technology can *improve* on-campus teaching, increase student throughput while actually increasing course quality, and help instructors reinvigorate their teaching.

The second part of the talk is a case study based on UC Berkeley's Software Engineering course, in which students use Agile approaches and leverage EdX MOOC technology (Berkeley's first and Coursera's first) in an open-ended design project. We agree with many of our colleagues that Agile is superior to disciplined or "Plan-and-Document" methodologies for such projects. Yet the new 2013 ACM/IEEE curriculum standard for software engineering, which places heavy emphasis on such projects, is heavily focused on Plan-and-Document terminology. Hence our question: If instructors follow the field's guidelines and use Agile in a classroom project, can their course fulfill the requirements of the new curriculum standard? Happily, the nonobvious answer is "yes"; I'll explain why, and the role of MOOCs in improving our on-campus course and enabling other instructors to replicate and build on our work.


The International Conference on Software Engineering, ICSE, provides programs where researchers, practitioners, and educators present, discuss, and debate the most recent innovations, trends, experiences, and challenges in the field of software engineering. ICSE 2013, the 35th in the conference series, encourages contributors from academia, industry, and government to share leading-edge software engineering ideas with inspirational leaders in the field. All events are at the Hyatt Regency San Francisco, right in the heart of the Embarcadero district, in view of the San Francisco Bay and the Golden Gate Bridge. --ICSE 2013 website


Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1,000 servers for one hour costs no more than using one server for 1,000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT.
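
As a back-of-envelope illustration of that claim (a minimal sketch with an assumed hourly price, not any vendor's actual rate), the pay-as-you-go cost of 1,000 servers for one hour matches that of one server for 1,000 hours, while the work finishes 1,000 times sooner:

```python
# Back-of-envelope illustration of cost elasticity under pay-as-you-go pricing.
# The hourly rate below is an assumed placeholder, not a real vendor price.
HOURLY_RATE = 0.10  # dollars per server-hour (assumed)

def cost(servers: int, hours: float, rate: float = HOURLY_RATE) -> float:
    """Total cost of renting `servers` machines for `hours` each."""
    return servers * hours * rate

burst = cost(servers=1000, hours=1)    # 1,000 servers for one hour
serial = cost(servers=1, hours=1000)   # one server for 1,000 hours

print(burst, serial)  # both 100.0: same cost, but the burst finishes 1,000x sooner
```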


Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). Some vendors use terms such as IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) to describe their products, but we eschew these because accepted definitions for them still vary widely. The line between "low-level" infrastructure and a higher-level "platform" is not crisp. We believe the two are more alike than different, and we consider them together. Similarly, the related term "grid computing," from the high-performance computing community, suggests protocols to offer shared computation and storage over long distances, but those protocols did not lead to a software environment that grew beyond its community.


The data center hardware and software is what we will call a cloud. When a cloud is made available in a pay-as-you-go manner to the general public, we call it a public cloud; the service being sold is utility computing. We use the term private cloud to refer to internal data centers of a business or other organization, not made available to the general public, when they are large enough to benefit from the advantages of cloud computing that we discuss here. Thus, cloud computing is the sum of SaaS and utility computing, but does not include small or medium-sized data centers, even if these rely on virtualization for management. People can be users or providers of SaaS, or users or providers of utility computing. We focus on SaaS providers (cloud users) and cloud providers, which have received less attention than SaaS users. Figure 1 makes provider-user relationships clear. In some cases, the same actor can play multiple roles. For instance, a cloud provider might also host its own customer-facing services on cloud infrastructure.


We argue that the construction and operation of extremely large-scale, commodity-computer data centers at low-cost locations was the key enabler of cloud computing, for they uncovered factors of 5 to 7 decrease in the cost of electricity, network bandwidth, operations, software, and hardware available at these very large economies of scale. These factors, combined with statistical multiplexing to increase utilization compared to traditional data centers, meant that cloud computing could offer services below the costs of a medium-sized data center and yet still make a good profit.
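
To make the arithmetic concrete, here is a minimal sketch with assumed, illustrative numbers (the input-cost factor and the utilization figures are placeholders, not measurements) showing how a factor-of-5 input-cost advantage plus higher utilization from statistical multiplexing lets a very large provider undercut a medium-sized data center and still keep a margin:

```python
# Illustrative comparison of effective cost per useful server-hour.
# All numbers are assumptions for the sake of the example, not measured data.

def cost_per_useful_hour(cost_per_server_hour: float, utilization: float) -> float:
    """Cost per server-hour of useful work, given average utilization."""
    return cost_per_server_hour / utilization

medium_dc   = cost_per_useful_hour(cost_per_server_hour=1.00,     utilization=0.15)
large_cloud = cost_per_useful_hour(cost_per_server_hour=1.00 / 5, utilization=0.60)

print(f"medium data center: ${medium_dc:.2f} per useful server-hour")    # ~$6.67
print(f"large cloud:        ${large_cloud:.2f} per useful server-hour")  # ~$0.33

# Even selling at, say, $1.00 per useful server-hour -- below the medium data
# center's own cost -- the large provider still keeps a healthy margin.
```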


Although they have not done so, cloud vendors could offer specialized hardware and software techniques to deliver higher reliability, presumably at a high price. This reliability could then be sold to users as a service-level agreement. But this approach only goes so far. The high-availability computing community has long followed the mantra "no single point of failure," yet the management of a cloud computing service by a single company is in fact a single point of failure. Even if the company has multiple data centers in different geographic regions using different network providers, it may have common software infrastructure and accounting systems, or the company may even go out of business. Large customers will be reluctant to migrate to cloud computing without a business-continuity strategy for such situations. We believe the best chance for independent software stacks is for them to be provided by different companies, as it has been difficult for one company to justify creating and maintaining two stacks in the name of software dependability. Just as large Internet service providers use multiple network providers so that a failure by a single company will not take them off the air, we believe the only plausible solution to very high availability is multiple cloud computing providers.
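
As a sketch of what such a multi-provider strategy might look like from the SaaS operator's side (the endpoint URLs are hypothetical, and real business continuity would also require replicating state, not just redundant request paths), a client can fail over the same request across independently operated clouds:

```python
# Minimal failover sketch across independently operated providers.
# The endpoint URLs below are hypothetical placeholders.
import requests

PROVIDER_ENDPOINTS = [
    "https://api.provider-a.example.com/v1/orders",  # assumed endpoint
    "https://api.provider-b.example.com/v1/orders",  # assumed endpoint
]

def submit_with_failover(payload: dict, timeout: float = 2.0) -> dict:
    """Try each provider in turn; return the first successful response."""
    last_error = None
    for url in PROVIDER_ENDPOINTS:
        try:
            resp = requests.post(url, json=payload, timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err  # this provider is down or unreachable; try the next
    raise RuntimeError("all providers failed") from last_error
```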


The cloud user is responsible for application-level security. The cloud provider is responsible for physical security, and likely for enforcing external firewall policies. Security for intermediate layers of the software stack is shared between the user and the operator; the lower the level of abstraction exposed to the user, the more responsibility goes with it. Amazon EC2 users have more technical responsibility (that is, must implement or procure more of the necessary functionality themselves) for their security than do Azure users, who in turn have more responsibilities than AppEngine customers. This user responsibility, in turn, can be outsourced to third parties who sell specialty security services. The homogeneity and standardized interfaces of platforms like EC2 make it possible for a company to offer, say, configuration management or firewall rule analysis as value-added services.
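
As one illustration of such a value-added service, the sketch below uses the boto3 SDK to flag EC2 security-group ingress rules that are open to the entire Internet; treating 0.0.0.0/0 as the thing to report is an assumption made for the example, not a complete firewall-rule analysis:

```python
# Sketch of a simple firewall-rule analysis pass over EC2 security groups.
# Assumes AWS credentials are configured and the boto3 SDK is installed.
import boto3

def find_world_open_rules():
    """Return (group id, from port, to port) for ingress rules open to 0.0.0.0/0."""
    ec2 = boto3.client("ec2")
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in group.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((group["GroupId"],
                                     perm.get("FromPort"), perm.get("ToPort")))
    return findings

if __name__ == "__main__":
    for group_id, from_port, to_port in find_world_open_rules():
        print(f"{group_id}: ports {from_port}-{to_port} open to the Internet")
```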


The primary security mechanism in today's clouds is virtualization. It is a powerful defense, and protects against most attempts by users to attack one another or the underlying cloud infrastructure. However, not all resources are virtualized and not all virtualization environments are bug-free. Virtualization software has been known to contain bugs that allow virtualized code to "break loose" to some extent. Incorrect network virtualization may allow user code access to sensitive portions of the provider's infrastructure, or to the resources of other users. These challenges, though, are similar to those involved in managing large non-cloud data centers, where different applications need to be protected from one another. Any large Internet service will need to ensure that a single security hole doesn't compromise everything else.


One last security concern is protecting the cloud user against the provider. The provider will by definition control the "bottom layer" of the software stack, which effectively circumvents most known security techniques. Absent radical improvements in security technology, we expect that users will use contracts and courts, rather than clever security engineering, to guard against provider malfeasance. The one important exception is the risk of inadvertent data loss. It's difficult to imagine Amazon spying on the contents of virtual machine memory; it's easy to imagine a hard disk being disposed of without being wiped, or a permissions bug making data visible improperly.
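
One hedge against exactly that class of inadvertent exposure, sketched below with the cryptography package (an illustrative choice; key management and distribution are deliberately out of scope), is for the cloud user to encrypt data before it ever reaches the provider, so an unwiped disk or a permissions bug reveals only ciphertext:

```python
# Minimal client-side encryption sketch: the provider stores only ciphertext,
# so an unwiped disk or a permissions bug does not expose the plaintext.
# Where the key lives and how it is rotated are out of scope for this sketch.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept by the cloud *user*, never uploaded
cipher = Fernet(key)

plaintext = b"customer records that should never be readable by the provider"
ciphertext = cipher.encrypt(plaintext)  # what actually gets stored in the cloud

# Later, after fetching the blob back from the provider:
assert cipher.decrypt(ciphertext) == plaintext
```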

