I work on Google’s air travel infrastructure team, which powers Google Flight Search. Last month, I did a #HangoutOnAir with university students in India. The main focus of that talk was to introduce the students to the sort of challenges that we face. The talk is up on Google Plus, and has since been split into two parts.
In the first part (which is targeted at a general audience), I cover the fundamentals of flight booking, tackling issues like routing (10,000+ routes for SFO-JFK!?), seat availability, fare codes, pricing, and more. It answers questions such as, “why did you pay $50 more than the person sitting next to you on the plane for the same ticket?”, and “why are tickets on this flight from SFO to JFK available when flying SFO to BOS via JFK, but not available for a non-stop flight from SFO to JFK?”
In the second part, I get into the computer-science details of the complexity of airfare search (even the simplest versions of this problem are NP-hard), and provide an (over)simplified description of QPX, the engine that Google uses to search and price airfare tickets. This talk is targeted at computer science students and professionals.
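To get a feel for why route enumeration alone blows up, here is a toy sketch in Python. The airport graph and the connection limit below are my own illustrative assumptions, not QPX's actual model or data; the point is only that the number of candidate routes grows quickly with each allowed connection, before fares and availability are even considered.

```python
# Toy airport connection graph (illustrative only; real airline networks
# are vastly denser, which is how SFO-JFK reaches 10,000+ routes).
flights = {
    "SFO": ["JFK", "ORD", "DEN", "LAX"],
    "LAX": ["ORD", "DEN", "JFK"],
    "ORD": ["JFK", "BOS"],
    "DEN": ["ORD", "JFK"],
    "BOS": ["JFK"],
    "JFK": [],
}

def routes(src, dst, max_legs):
    """Enumerate all simple routes from src to dst using at most max_legs flights."""
    found = []

    def walk(here, path):
        if len(path) - 1 > max_legs:   # too many legs already; prune
            return
        if here == dst:
            found.append(path)
            return
        for nxt in flights.get(here, []):
            if nxt not in path:        # simple paths only: no revisiting airports
                walk(nxt, path + [nxt])

    walk(src, [src])
    return found

print(len(routes("SFO", "JFK", 1)))  # non-stop routes only
print(len(routes("SFO", "JFK", 3)))  # allowing up to two connections
```

Even in this six-airport toy, relaxing the limit from non-stop to two connections multiplies the route count several times over, and each route still has to be crossed with fare codes, booking classes, and seat availability on every leg to produce priced itineraries.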
Abstract: Failure detectors are oracles that were introduced to provide processes in asynchronous systems with information about faults. This information can then be used to solve problems that are otherwise unsolvable in asynchronous systems. A natural question is what the “minimum amount of information” is that a failure detector has to provide for a given problem. This question is classically addressed using a relation which states that a failure detector D is stronger (that is, provides “more, or better, information”) than a failure detector D’ if D can be used to implement D’. It has recently been shown that this classic implementability relation has some drawbacks. To overcome them, different relations have been defined, one of which states that a failure detector D is stronger than D’ if D can solve all the time-free problems solvable by D’. In this paper, we compare the implementability-based hierarchy of failure detectors to the hierarchy based on solvability. We do so by introducing a new proof technique for establishing the solvability relation. We apply this technique to known failure detectors from the literature and demonstrate significant differences between the two hierarchies.
Abstract: We propose two algorithms for solving self-stabilizing dining with crash locality 1 in asynchronous shared-memory systems with safe registers. Since this problem cannot be solved in pure asynchrony, we augment the shared-memory system with failure detectors. Specifically, we introduce the anonymous eventually perfect failure detector $?\Diamond P$ (a variant of the anonymous perfect failure detector introduced by Guerraoui et al.), and show that this failure detector is sufficient to solve the problem at hand.
Abstract: Dining philosophers is a scheduling paradigm that determines when processes in a distributed system should execute certain sections of their code, so that processes do not execute ‘conflicting’ code sections concurrently, for some application-dependent notion of a ‘conflict’. Designing a stabilizing dining algorithm for shared-memory systems subject to process crashes presents an interesting challenge: classic stabilization relies on all processes continuing to execute actions forever, an assumption which is violated when crash failures are considered. We present a dining algorithm that is both wait-free (tolerating any number of crashes) and pseudo-stabilizing. Our algorithm works in an asynchronous system in which processes communicate via shared regular registers and have access to the eventually perfect failure detector $\Diamond P$. Furthermore, with a stronger failure detector, the solution becomes wait-free and self-stabilizing. To our knowledge, this is the first such algorithm. Prior results show that $\Diamond P$ is necessary for wait-freedom.