Thursday, September 29, 2016

Agile DevOps

If you're surprised at how frequently I mention articles from Communications of the ACM ("CACM") in this blog, you're missing out. Especially if you're a student, membership is inexpensive and the flagship monthly magazine tends to be full of really good stuff relevant to both practice and research.

Today I'm blogging about an article in the July 2016 issue (yes, I'm behind in my reading) on the "small batches principle" for Dev/Ops tasks. The article is written by a Dev/Ops engineer at Stack Overflow, the now-legendary site where all programmers go to give and receive coding help, and which was co-founded by Joel Spolsky, one of my software heroes and author of the excellent Joel On Software blog and books.

This article talks about various in-house tasks associated with deploying software, such as pre-release testing and hot-patching, building and running a monitoring system, and other tasks that this company (and many others) historically did once in a great while. The month during which a new release was being tested and deployed became known as "hell month" because of the magnitude and pain of the tasks.

The article describes how Stack Overflow migrated to a model of doing smaller chunks of work incrementally rather than very large chunks every few months; how they moved to a "minimum viable product" mentality (that is, the simplest product you can build that lets you get customer feedback to validate or reject the features and product direction); how they adopted a "what can get done by Friday?" mentality, so that there would always be some new work on which their customers (in this case, the development and test engineers) could comment; and so on.

Essentially, they moved to an Agile model of Dev/Ops: do work in small batches so that each batch minimizes the risk of going off in the wrong direction; deploy incremental new changes frequently to stimulate customer feedback; and thereby avoid painful "hell months" (possibly analogous to "merges from hell" on long-lived development branches).
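To make the "small batches" idea concrete, here's a minimal sketch (mine, not from the article) of a deploy loop that applies pending changes one at a time and smoke-tests after each one, so a failure implicates a single small change rather than a month's worth of work. The Change type, the smoke_test function, and the no-op placeholder changes are all hypothetical:

```python
# Minimal sketch of deploying in small batches (hypothetical names throughout).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Change:
    name: str
    apply: Callable[[], None]     # e.g., run one migration or deploy one commit
    rollback: Callable[[], None]  # undo just this one change

def smoke_test() -> bool:
    """Placeholder for a quick end-to-end check (e.g., hit a health-check URL)."""
    return True

def deploy_in_small_batches(changes: List[Change]) -> None:
    for change in changes:
        change.apply()
        if not smoke_test():
            # Only one small change is suspect, so diagnosis and recovery are cheap.
            change.rollback()
            raise RuntimeError(f"change {change.name!r} broke the smoke test")
        print(f"deployed {change.name}")

if __name__ == "__main__":
    noop = lambda: None
    deploy_in_small_batches([Change("add-index", noop, noop),
                             Change("new-endpoint", noop, noop)])
```

The point isn't the code itself but the shape of the loop: each iteration is a small, individually verifiable, individually reversible step.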

Agile applies to a lot of things, but this article does a nice job of mapping the concepts to the Dev/Ops world from the pure development world.

Tuesday, September 20, 2016

Flipped classroom? No thanks, I'd rather you lecture at me

As an early adopter and enthusiast of MOOCs, I've followed the "flipped classrooms" and "lecture is dead" conversations with some interest. Indeed, why would students attend lecture—and why would professors repeat last semester's lectures—when high-quality versions are available online for viewing anytime?

This semester, I'm once again teaching Software Engineering at Berkeley to about 180 undergraduates. (Past enrollments have been as high as 240, but we capped it lower this semester to make projects more manageable.) In the past, Dave Patterson and I have team-taught this class and we've developed a fairly well-polished set of lectures and other materials that are available as a MOOC on edX, Agile Development Using Ruby on Rails, and which we also use as a SPOC.

Last spring, by the end of the semester our lecture attendance had plummeted to about 15% of enrollment. We surveyed the students anonymously to ask how we could make lecture time more valuable. A majority of students suggested that since they could watch pre-recorded lectures, why not use lecture time to do more live coding demos and work through concrete examples? In other words, a classic "flipped lecture" scenario.

So this semester I dove in feet-first to do just that. Less than 20% of my lecture time has been spent covering material already in the recorded lectures; instead, we made those available to students from the get-go. The rest has consisted of live demos showing the concepts in action, and activities involving extensive peer learning, of which I'm a huge fan. I've even started archiving the demo "scripts" and starter code for the benefit of other instructors. The "contract" was that we would list which videos (and/or corresponding book sections) students should read or watch before lecture, and then during the lecture time (90 minutes twice a week) the students would be well prepared to understand the demo and ask good questions as it went along.

I told the students I would also sprinkle "micro-quizzes" into the lectures to spot-check that they were indeed reading/viewing the preparatory materials. The micro-quiz questions were intended to be simple recall questions that you'd have no trouble answering if you skimmed the preparatory material for the main ideas. (We use iClicker devices that are registered to the students, so we can map both responses and participation during lecture to individual students.)

Today in lecture, a bit more than 4 weeks into the course, I've officially declared the flipped experiment a failure. (Well, not a failure. A negative result is interesting. But it's not the result I expected.)

Since we made the pre-recorded lectures available as a SPOC using the edX platform itself, and students have to log in with their Berkeley credentials to access the course, we can use edX Insights (part of the platform's built-in analytics) to see how many people are watching the videos.

According to the edX platform's analytics, the typical video is partially viewed by about 20 people. Only 45 people have ever watched any video to completion. Remember, this is in a class of 180 enrolled students, and it was the previous cohort that specifically requested this format.

Maybe people are watching the videos after lecture rather than before? If they were, you'd expect video viewing numbers to be higher for videos corresponding to previous weeks, but they're not.

Maybe people are reading the book instead? If they were, the microquiz answer distributions should be strongly unimodal—these are simple recall questions that you can hardly get wrong if you even skim the video or the book—but in fact the distributions are in some cases nearly uniform.

Maybe people already know the material from (e.g.) an internship? One or two students did approach me after today's lecture to tell me that. I believe them, and I know there are some students like this. But we also gave a diagnostic quiz at the beginning of the semester, and based on its results, very few people match this description.

Maybe students don't have time to watch videos or read the book before lecture? This is a 4-unit course, which means students should nominally be spending 12 hours per week total on it, of which lecture and section together account for only 4. The reading assignments, which can be done instead of watching the videos, average out to 15-20 pages twice a week, or 30-40 pages per week. Speaking as a faculty member leading an upper-division course in the world's best CS department at the world's best public university, I don't believe 30-40 pages of reading per week per course is onerous. Also, in past offerings of the course, we've polled students towards the end of the course to ask how many hours they are actually spending per week. The average has consistently been 8-10 hours, even towards the end where the big projects come due. So by the students' own self-reporting, there's 2-4 hours available to either do the readings or watch the videos.

As you might imagine, planning and executing live demos requires a couple of hours of preparation per hour of lecture to come up with a demo "script", stage code that can be copy-pasted so students don't have to watch me type every keystroke, ensure the examples run in such a way as to illustrate the correct points or techniques, remember what to emphasize in peer-learning questions at each step of the demo, and so on. But it's discouraging to do this if at most 1/9 of the students are doing the basic prep work that will enable them to get substantive value out of the live demo examples.

So at the end of lecture today I informed the students that I wouldn't use this format any longer—those who wanted to work through examples could come to office hours, which have been quiet so far—and I asked them to vote using the clickers on how future lectures should be conducted:

(A) Deliver traditional lectures that closely match the pre-recorded ones
(B) Cancel one lecture each week, replacing it with open office hours (in addition to existing office hours)
(C) Cancel both lectures each week, replacing both with open office hours (ditto)
(D) I have another suggestion, which I am posting right now on Piazza with tag #lecture
(E) I don’t care/I’ll probably ignore lecture regardless of format

Below are the results of that poll. The people have spoken. Less work for me, but disappointing on many levels, and I feel bad for the 20-30 people who do prepare and who have been getting a lot of value out of the more involved examples in "lecture" that go beyond the book or videos. And it doesn't seem like a great use of my time to do a live performance of material that's already recorded and on which I'm unlikely to improve, even if it is less work (and less interesting).

(Note that the number of poll respondents adds up to 121, consistent with previous lectures this semester. So even in steady state at the beginning of the semester, 1/3 of the students aren't showing up, even though participation in peer-instruction activities using the clickers counts for part of the grade.)

(Update: I posted a link to this article on the Berkeley CS Facebook page, and many students have commented. One particularly interesting comment comes from a student who took my class previously: “When I took your class, as someone who rarely, if ever, went to lecture or prepared, the microquizzes helped me convince myself to do that. They weren't hard, so getting them wrong was just embarrassing to me. I was probably the best student I ever was until you stopped doing them my semester for reasons I can't recall and after that the whole thing fell apart for me.” So maybe I should just stick to my guns on the "read before lecture" part and make the microquizzes worth more than the ~10% they're currently worth...)

A year or so ago, I approached the campus Committee on Curriculum and Instruction to ask whether they'd be likely to approve a "lab" version of this course, in which there is no live lecture (only pre-recorded), the lecture hours are replaced with open lab/office hours, and all the other course elements (teaching assistants, small sections, office hours for teaching staff, etc.) are preserved. They said that would likely be fine. It's sounding like a better and better deal to me given that the majority of students want me to spend my time doing something nearly identical to what I've done in past semesters. And if previous semesters are any indication—and this has been very consistent since I started teaching this course in 2012—lecture attendance will fall off about halfway through. (I won't be surprised if the rationale given is that the lectures are available to watch online, even though the data we're gathering this semester shows that's clearly not happening.)

In an ideal universe, maybe we'd have 2 versions of the course, one tailored to people who do the prep work and want a fast-paced in-depth experience in lecture and another for people who prefer to be lectured to in the traditional manner. But we don't live in that ideal universe because of finite resource constraints. And one could argue that we already have that second version of the course—it's on edX and is the highest-grossing and one of the most popular courses on the platform, and it's free.

Instructors: What has your experience been flipping a large class?

Students: What would you do?

Wednesday, September 7, 2016

The hidden benefits of microservices

If you know me or read my writings, you probably know that I've become a big fan of Communications of the ACM, the monthly magazine of the Association for Computing Machinery, the world's largest and most prestigious professional society for computing. The August issue had a very nice essay by one of the original Amazon engineers on the hidden benefits of moving to a microservices-based architecture.

The appearance of the article is particularly timely since I just started teaching Engineering Software as a Service to about 200 Berkeley undergraduates this fall semester, and a service-oriented approach to thinking about SaaS is a cornerstone of the course.

I'm particularly interested in an Amazonian's views since in the ESaaS course and accompanying book we cite Amazon's early move to services as an example of a tech company being ahead of an important curve.

The article is short and well worth a read, especially as it's written by an engineer who was there from the first days of the "monolithic" Amazon.com site and through the transition to the service-oriented architecture Amazon has today, including many services also made available publicly through AWS. Essentially, the benefits below apply as long as your APIs are stable and documented, and you have a tight dev/ops team keeping each service running reliably:

  1. Permissionless innovation: no need to ask "gatekeepers" for permission to add/try new features, as long as the features don't interfere with existing use of the API.
  2. Forced to design for failure: cross-service failures are hard to debug, so there's a strong motivation to "design for debuggability/diagnosis" so that many failure types become routine and even automatically recoverable at the individual microservice level.
  3. Forced to disrupt trust boundaries: within small teams, mutual trust is easier since everyone can sign off on every commit. Large teams can't do this, and a microservices architecture both enables small teams and forces coming to terms with the fact that the same level of trust that exists within a team cannot always stretch across API boundaries. This is good for scaling an organization, as Conway's Law suggests ("Any organization will produce a system design that mirrors the organization's internal communication structure").
  4. A service's developers are also its operators, so they are very close to their customers, whether those customers are end users or other microservices. The tighter customer feedback loop accelerates the pace of improvement in the service.
  5. Makes deprecation somewhat easier: if you version the API it becomes clear who "really" cares about the legacy versions. (A minimal sketch of this idea appears right after this list.)
  6. Makes it possible to change the schema or even the persistence model for data and metadata without permission from others.
  7. Allows separating out particularly sensitive functions as microservices (e.g. handling sensitive data such as financial or health) and "concentrating the pain" (and focus) of how best to steward that data.
  8. Encourages thinking more aggressively about testing from the outside as well as the inside: continuous deployment, "smoke tests" where an experimental feature is rolled out fractionally to see if anything breaks, and phased deployment of new features can all improve the precision of the test suite and result in lower time-to-repair when something breaks in production. (A small fractional-rollout sketch appears at the end of this post.)
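To illustrate benefit 5, here's a minimal sketch (mine, not Stack Overflow's or Amazon's) of a versioned API in Flask: the legacy /v1 route stays alive but advertises its deprecation and logs its callers, so it becomes clear who still depends on it. The route shapes, field names, and port are invented for illustration:

```python
# Hypothetical versioned microservice API; only the versioning/deprecation
# pattern matters, not the toy "orders" data.
import logging
from flask import Flask, jsonify, request

app = Flask(__name__)
log = logging.getLogger("orders-api")

ORDERS = {42: {"id": 42, "total_cents": 1999, "currency": "USD"}}

@app.route("/v1/orders/<int:order_id>")
def get_order_v1(order_id):
    # Legacy response shape: total as a float number of dollars.
    log.warning("deprecated v1 called by %s", request.headers.get("User-Agent"))
    order = ORDERS[order_id]
    resp = jsonify({"id": order["id"], "total": order["total_cents"] / 100})
    resp.headers["Deprecation"] = "true"  # let well-behaved clients notice
    return resp

@app.route("/v2/orders/<int:order_id>")
def get_order_v2(order_id):
    # Current response shape: integer cents plus an explicit currency.
    return jsonify(ORDERS[order_id])

if __name__ == "__main__":
    app.run(port=5000)
```

Because every v1 call is logged and tagged, "who really cares about the legacy version" stops being a guess and becomes a query over the logs.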
As the author points out, moving from an existing monolithic app to microservices isn't easy (former Director of Engineering at Twitter, Raffi Krikorian, explained what a big move this was for Twitter and why the "Rails doesn't scale" meme wasn't really an accurate description of what happened there). And unless your microservices architecture really does enable the above behaviors, you're probably not 100% of the way there. But this is clearly the way SaaS architecture is going, and will soon be its own new chapter in ESaaS.
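As a companion to benefit 8, here's a small sketch (again mine, not Amazon's actual mechanism) of the fractional-rollout idea: route a deterministic few percent of users to an experimental code path, watch the error rate, and only then widen the rollout. The feature names and percentages are made up:

```python
# Hypothetical feature-flag style fractional rollout.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user into the first `percent` of 100 buckets."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def recommendations(user_id: str) -> str:
    if in_rollout(user_id, "new-recommendations", percent=5):
        return new_recommendations(user_id)   # experimental path, watched closely
    return old_recommendations(user_id)       # existing, known-good path

def new_recommendations(user_id: str) -> str:
    return f"new recs for {user_id}"

def old_recommendations(user_id: str) -> str:
    return f"old recs for {user_id}"

if __name__ == "__main__":
    users = [f"user{i}" for i in range(1000)]
    exposed = sum(in_rollout(u, "new-recommendations", 5) for u in users)
    print(f"{exposed} of {len(users)} users would see the experimental path")
```

Hashing on the user ID (rather than flipping a coin per request) keeps each user's experience consistent while the experiment runs, which makes any breakage much easier to trace.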