The FLP Proof of the Impossibility of Consensus

In this lecture we'll see the FLP proof of the impossibility of consensus in asynchronous distributed systems. Consensus is impossible to solve in an asynchronous distributed system. This is a now-famous result proved by Fischer, Lynch and Paterson in 1983, known as the FLP result after the first letters of the last names of the paper's three coauthors.

Before then, many distributed systems designers had been claiming 100% reliability for their systems, and this result stopped them dead in their tracks. A lot of reliability claims vanished overnight, and one of the long-term side effects was the "multiple 9s of reliability" offerings that vendors now publicize for their products and services.

So once again, just to remind you: an asynchronous distributed system has no bounds on message delays, processing delays, or clock drift rates. These might be arbitrarily long or arbitrarily short.

The consensus problem requires each process p to decide on the same value. Each process p has a state, which consists of the program counter, the registers, the stack, the local variables, the heap, and anything else you would consider part of a core dump of the process. It also has an input register xp, which is initially either 0 or 1. Different processes might have different input register values; a process's input register value is its proposal to the group. Each process also has an output register yp, which is initially undecided but needs to be set to 0 or 1. The only constraint is that once you set the output register, you cannot change it.

And remember that each process has its own output register, and you want the consensus protocol, at the end, to have all the non-faulty processes set their output variables to all 0s or all 1s. So you want an all-0s decision among the non-faulty processes or an all-1s decision among the non-faulty processes.

Once again, with just these two constraints by themselves, the problem is trivial to solve, because you can just say, "Well, everyone set their output variables to 0 all the time," and that solves it, but that is not interesting or useful at all. So we have the non-triviality clause, which says that at least one initial system state leads to each of the above outcomes, meaning that at least one initial system state leads to an all-0s outcome, and at least one initial system state leads to an all-1s outcome.
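
To make this concrete, here is a minimal Python sketch of a process's consensus-relevant state; the class and field names are illustrative, not from the FLP paper. The write-once constraint on the output register is enforced in decide:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Process:
    """Consensus-relevant state of one process (illustrative sketch)."""
    pid: int
    xp: int                    # input register: this process's proposal, 0 or 1
    yp: Optional[int] = None   # output register: starts undecided

    def decide(self, value: int) -> None:
        # The output register is write-once: once set, it cannot change.
        assert value in (0, 1)
        if self.yp is not None:
            raise RuntimeError("output register already set")
        self.yp = value
```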

So let's try to set up the proof. For the impossibility proof, we'll consider a more restrictive system model and an easier problem. This is okay because if you can show that even an easier problem is impossible to solve, then obviously the harder consensus problem is impossible to solve as well. Likewise, if you show that the consensus problem is impossible to solve in a more restrictive system model, then obviously it's impossible to solve in the less restrictive system model as well.

So what do I mean by a restrictive system model? Instead of considering the entire network, we'll consider the network to be a simple global message buffer. When a process p sends a message m to a process p', the message m just gets deposited in the global message buffer. Subsequently, p' may call receive on the global message buffer, and this may return either the message m that is waiting for it, or it may return null, and it may continue returning null for a while, or maybe even forever, because the message m might be delayed arbitrarily long. So we have abstracted out our network to be just this global message buffer that sits underneath all the processes.
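
A small sketch of this abstraction, again with made-up names: receive may return None even when a message is waiting, which models arbitrarily long delay.

```python
import random
from dataclasses import dataclass, field

@dataclass
class MessageBuffer:
    """The entire network, abstracted into one global buffer (sketch)."""
    pending: list = field(default_factory=list)  # (dest, msg) pairs in flight

    def send(self, dest: int, msg) -> None:
        self.pending.append((dest, msg))

    def receive(self, dest: int):
        # May return a message waiting for dest, or None: delivery can be
        # postponed arbitrarily often, modeling unbounded message delay.
        waiting = [i for i, (d, _) in enumerate(self.pending) if d == dest]
        if not waiting or random.random() < 0.5:
            return None
        return self.pending.pop(random.choice(waiting))[1]
```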

We also define the state of a process, which we have seen before: it consists of its program counter, heap, registers, stack, and everything else, along with the input and output variables. But we also define a state for the entire system; we call this global state a configuration. The configuration, or global state, consists of a collection of states, one state for each process, along with the state of the global message buffer itself. So the states of all the processes, together with the state of the network, make up the global state, or configuration.

Now we also define an event. This is slightly different from the Lamport events you've seen before. An event consists of three steps, which are executed atomically, in one shot. The event starts with the receipt of a message by a process, say a process p. Then the message is processed by p, and this may change the recipient's state. Finally, p might decide to send out some messages as a result of this receipt, and those resulting messages are then deposited in the global message buffer. All three steps put together make up an event: a process p receives a message, processes it, and deposits the resulting messages into the global message buffer.
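
Here is one way to sketch an event in code. The step function, which stands in for the process's protocol logic, is an assumption of the sketch: it consumes a message, updates the process state, and returns the messages to send.

```python
import copy

def apply_event(config, event, step):
    """Apply event = (p, m) atomically to a configuration (sketch).

    config: dict with 'procs' (pid -> Process) and 'buffer' (a MessageBuffer)
    step:   protocol logic: (Process, msg) -> list of (dest, msg) to send
    Assumes the event is applicable: (p, m) is in the buffer.
    """
    new = copy.deepcopy(config)           # configurations are values
    p, m = event
    new['buffer'].pending.remove((p, m))  # 1. p receives m
    out = step(new['procs'][p], m)        # 2. p processes m; its state may change
    for dest, msg in out:                 # 3. resulting messages are deposited
        new['buffer'].send(dest, msg)
    return new
```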

Next we'll define a schedule. A schedule is simply a linear sequence of events: one event followed by another event followed by another event. Here on the left is an example of a schedule. You start with a configuration, or global state, which we label c. An event e' is applied to it, meaning that process p' receives m', processes it, and deposits any resulting messages into the global message buffer. That changes the state of process p', and potentially the state of the global message buffer as well, which means the configuration itself has changed, to something we label c'. A different event, e'', similarly changes the configuration again, to another configuration c''. These two events, e' followed by e'', form a schedule, because they are a linear sequence of events; we call that schedule s. When the schedule s is applied to the initial configuration c (this c is the same c as on the left), it results in the configuration c'' (again, the same c'' as on the left). So the left side of the slide is equivalent to the right side: a schedule is essentially a compact way of representing a sequence of events, rather than mentioning each event separately.
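
In terms of the sketches above, applying a schedule is just a fold of apply_event over the sequence of events:

```python
def apply_schedule(config, schedule, step):
    """Apply a linear sequence of events to a configuration (sketch)."""
    for event in schedule:
        config = apply_event(config, event, step)
    return config

# The schedule s = [e1, e2] applied to c yields c'' in one call:
# c2 = apply_schedule(c, [e1, e2], step)
```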

So here is our first lemma, our first result. It says that disjoint schedules are commutative. What does this mean? Suppose I have a schedule s1, consisting of some sequence of events, and another schedule s2, consisting of another sequence of events. Remember that each schedule consists of a set of messages being received at a set of processes. Consider all the processes in s1 that receive messages, and all the processes in s2 that receive messages. If these two sets are disjoint, meaning there are no processes in common receiving messages in both s1 and s2, then the schedules are commutative; in other words, their order can be flipped. So if you apply s1 first on a configuration c and then s2, to reach a configuration c'', then in a different scenario, where you apply s2 first on c and then s1, you would reach the same final configuration c''. Why is this true? Because these are disjoint sets of receiving processes, applying s1 and s2 in either order results in the same final outcome. In fact, interleaving s1 and s2 would also result in the same outcome, the configuration c''.
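
Using the sketches above, the lemma can be written down as an executable property, assuming configurations compare by value (as the dataclasses here do):

```python
def receivers(schedule):
    """Set of processes that receive a message somewhere in the schedule."""
    return {p for (p, _) in schedule}

def commutes(config, s1, s2, step) -> bool:
    """Lemma 1 as a property: for disjoint receiver sets, order is irrelevant."""
    assert receivers(s1).isdisjoint(receivers(s2))
    one = apply_schedule(apply_schedule(config, s1, step), s2, step)
    two = apply_schedule(apply_schedule(config, s2, step), s1, step)
    return one == two  # both orders reach the same final configuration c''
```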

So earlier we saw the consensus problem; here we'll try to prove impossibility for an easier consensus problem, in which some process, not all processes but just some process, eventually sets its output variable yp to 0 or 1. We'll also assume that only one process crashes, but we are free to choose which process crashes.

We define configurations to have valences. A configuration C has a set of decision values V reachable from it, and since we are only considering 0 or 1 decisions, there might be either one or two decision values reachable from it. If both decisions, an all-0s outcome and an all-1s outcome, are reachable from it, then the size of the valency is 2, and we say that the configuration C is bivalent. If only one decision is reachable from it, either an all-0s decision or an all-1s decision, not both, then the configuration C is said to be 0-valent or 1-valent, respectively. Bivalent essentially means that the configuration C is unpredictable: it is not yet determined which value the system is going to decide. Essentially, what we're going to show is that a system can always be kept in a bivalent state; that is, it can always be prevented from ever making a decision and ever being sure about its decision.
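
Valency is then just a classification of the reachable decisions. A sketch, assuming a hypothetical (and, in general, expensive) helper reachable_decisions that enumerates every decision value any execution can reach from a configuration:

```python
def valency(config, reachable_decisions) -> str:
    """Classify a configuration by the decisions reachable from it (sketch)."""
    v = reachable_decisions(config)   # subset of {0, 1}, assumed computable
    if v == {0, 1}:
        return "bivalent"             # undecided: both outcomes still possible
    return "0-valent" if v == {0} else "1-valent"
```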

So the FLP proof shows two things. First, it shows that there is some initial configuration, some initial global state, that is bivalent by itself. Second, it shows that starting from a bivalent configuration, there is always some sequence of events that keeps the resulting system state, or configuration, bivalent as well. So, essentially, there are always some things that could happen in the system that keep it moving from one bivalent configuration to another, and so prevent the system from ever reaching a decision with respect to consensus.

So let's show the first part of this proof; that's the second lemma. Here we show that some initial configuration is bivalent. Let's prove it by contradiction: assume this is not true, that is, assume that all initial configurations are either 0-valent or 1-valent. Now, if there are N processes in the system, there are 2^N possible initial configurations. Why is this? Each of the processes can propose either 0 or 1 for its input variable, so you have two possibilities per process, which means there are 2^N possible initial configurations. Now we create a lattice, of course a virtual lattice, where the nodes are the initial configurations, so there are 2^N nodes in this lattice. This lattice is essentially a hypercube of dimension N. We link two configurations together, joining them by an edge, if they differ in the initial input variable value xp for exactly one process. For instance, suppose I have two processes, P1 and P2. Then I'm going to have a lattice with four nodes, four initial configurations, where the initial values for the input variables are 0,0; 1,0; 1,1; and 0,1. In this lattice, the 0,0 node is linked to the 1,0 node because they differ in the input variable value for P1 only, exactly one process. Also, the 1,1 node is linked to the 1,0 node because these two configurations differ in the input variable value for P2. So the hypercube in the two-process case looks like a square, the hypercube in the three-process case looks like a cube, and so on and so forth.
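
A quick sketch of this virtual lattice: nodes are the 2^N vectors of initial proposals, and two nodes are joined by an edge exactly when they differ in one process's input.

```python
from itertools import product

def hypercube(n: int):
    """Nodes: all 2^n initial input assignments; edges: differ in one bit."""
    nodes = list(product((0, 1), repeat=n))
    edges = [(u, v) for u in nodes for v in nodes
             if sum(a != b for a, b in zip(u, v)) == 1 and u < v]
    return nodes, edges

# hypercube(2) gives a square: 4 nodes, 4 edges.
# hypercube(3) gives a cube:   8 nodes, 12 edges.
```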

Now, here we are assuming that each configuration is either 0-valent or 1-valent; there are no bivalent configurations. So we tag each configuration with a 0 or a 1 according to its valency, either 0-valent or 1-valent. By the non-triviality clause there is at least one 0-valent state, at least one configuration tagged with a 0, and at least one 1-valent state, at least one configuration tagged with a 1. And because everything is connected in just one hypercube, it has to be the case that at least one 0-valent configuration is linked directly to a 1-valent configuration. If you imagine the hypercube, you will see that this is true. So these two configurations differ in the input variable of exactly one process; say that process is p, and let's consider a run where this process p has crashed, that is, it is silent throughout. The two initial configurations are then indistinguishable, because the only thing that differed between them is the state of p, but p has crashed, so as far as the running system is concerned, p has no effect on it. This means that both these initial configurations are, in fact, the same. Yet one of them, the 0-valent configuration, will result in an all-0s outcome, and the other, the 1-valent configuration, will result in an all-1s outcome. So either one of these two initial configurations, in the run where p has crashed, is in fact a bivalent configuration, because it may result in an all-0s decision or it may result in an all-1s decision. That contradicts our assumption that every initial configuration is 0-valent or 1-valent.

Okay, so we have shown that when you have one process that could crash, and we can choose which process crashes, there is at least one bivalent initial configuration in the system. That's the first part of the proof.

Next we need to show that, starting from a bivalent configuration, there is always another bivalent configuration that is reachable. Notice that this doesn't say you can never reach consensus, ever. It says that there is always some way in which you can be prevented from reaching consensus.

Let the red configuration be a bivalent configuration, and let the event e, which consists of the process p receiving a message m, be some event that is applicable to the red configuration; that is, m is in the global message buffer in the red configuration. Now let's put our hand on e and prevent e from being applied. You might still be able to apply some other events to the red configuration, and there are some configurations you might be able to reach, starting from this red configuration, without ever applying e. We call that set C; those are the blue configurations shown in the triangle, the configurations reached without applying the special event e. Why are we not applying e? You'll see in a moment; there is a reason for it. Now, if you take any one of the blue or red configurations in the triangle and apply the single special event e to it, you reach another, dark blue configuration. Let that set of dark blue configurations be called D. Once again, any configuration in D is reached by applying the special event e to one of the configurations in the triangle. So, to summarize what we have discussed: you have the initial bivalent configuration, the red one; you don't apply the event e to it, and all the possible states reachable that way are in the triangle; you take one of the configurations in the triangle and apply the event e to it, and you reach a state, or configuration, that is in the set D.

Okay, so we claim that the set D contains a bivalent configuration, and again the proof here is by contradiction. If we can show that the set D contains a bivalent configuration, then we have shown that there is a sequence, consisting of at least one event, that starts from a bivalent configuration, the red one, and leads to another bivalent configuration. So let's assume the opposite: suppose that D has only 0-valent and 1-valent configurations, and no bivalent ones. So there are these states in D ...
