Lecture 12: Bellman-Ford, Floyd-Warshall, and Dynamic Programming!
Announcements
• HW5 due Friday
• Midterms have been graded!
  • Pick up your exam after class.
  • Average: 84, Median: 87
  • Max: 100 (x4)
• I am very happy with how well y'all did!
• Regrade policy:
  • Write out a regrade request as you would on Gradescope.
  • Hand your exam and your request to me after class on Wednesday or in my office hours Tuesday (or by appointment).
Last time
• Dijkstra's algorithm!
• Solves single-source shortest path in weighted graphs.
[Figure: example weighted directed graph on vertices s, u, v, a, b, t]
Today
• Bellman-Ford algorithm
  • Another single-source shortest path algorithm
  • This is an example of dynamic programming
  • We'll see what that means
• Floyd-Warshall algorithm
  • An "all-pairs" shortest path algorithm
  • Another example of dynamic programming
Recall
• A weighted directed graph:
[Figure: directed graph on vertices s, u, v, a, b, t with weights on the edges]
• Weights on edges represent costs.
• The cost of a path is the sum of the weights along that path.
• A shortest path from s to t is a directed path from s to t with the smallest cost.
• The single-source shortest path problem is to find the shortest path from s to v for all v in the graph.
[Figure captions: "This is a path from s to t of cost 22." "This is a path from s to t of cost 10. It is the shortest path from s to t."]
One drawback to Dijkstra
• Might not work with negative edge weights
• On your homework!
[Figure: the same graph, now with some negative edge weights (-1, -2)]
Why would we ever have negative weights?
• Negative costs might mean benefits.
• e.g., it costs me -$2 when I get $2.
Bellman-Ford Algorithm
• Slower (but arguably simpler) than Dijkstra's algorithm.
• Works with negative edge weights.
Bellman-Ford Algorithm
• We keep* an array d(k) of length n for each k = 0, 1, …, n-1.
[Figure: four-vertex graph on s, u, v, t with edge weights 5, 2, 2, 1, -2, next to four empty tables d(0), d(1), d(2), d(3), each indexed by s, u, v, t]
*We won't actually store all of these, but let's pretend we do for now.
Formally, we will maintain the loop invariant:
d(k)[b] is the cost of the shortest path from s to b with at most k edges in it, for all b in V.
• For example, this is the shortest path from s to t with at most two edges in it.
• But it's not the shortest path from s to t (with any number of edges). That's this one.
• To start: d(0)[s] = 0 and d(0)[v] = ∞ for every other v, since with zero edges we can only reach s itself.
  d(0): s = 0, u = ∞, v = ∞, t = ∞   (d(1), d(2), d(3) not yet filled in)
Now update!
• We will use the table d(0) to fill in d(1)
• Then use d(1) to fill in d(2)
• …
• Then use d(k-1) to fill in d(k)
• …
• Then use d(n-2) to fill in d(n-1)
This eventually gives us what we want:
• d(k)[a] is the shortest path from s to a with at most k edges.
• Eventually we'll get all the shortest paths…
While maintaining: d(k)[b] is the cost of the shortest path from s to b with at most k edges in it.
How do we get d(k)[b] from d(k-1)?
• Two cases (say k = 3):
[Figure: two copies of a small graph, each showing a path from s to b]
• Case 1: the shortest path from s to b with at most k edges actually has at most k-1 edges.
  Then d(k)[b] = d(k-1)[b].
• Case 2: the shortest path from s to b with at most k edges really has k edges.
  Then d(k)[b] = d(k-1)[a] + w(a,b) for some a, the second-to-last vertex on that path.
Want to maintain: d(k)[b] is the cost of the shortest path from s to b with at most k edges in it.
Taking the better of the two cases:
d(k)[b] = min_a { d(k-1)[a] + w(a,b) }, together with the option d(k-1)[b].
Bellman-Ford Algorithm*
• Bellman-Ford*(G, s):
  • Initialize d(k) for k = 0, …, n-1
  • d(0)[v] = ∞ for all v other than s
  • d(0)[s] = 0
  • For k = 1, …, n-1:
    • For b in V:
      • d(k)[b] ← min{ d(k-1)[b], min_a { d(k-1)[a] + weight(a,b) } }
  • Return d(n-1)
If we set d(k)[b] to be the minimum of the previous two cases, then we maintain the loop invariant that d(k)[b] is the cost of the shortest path from s to b with at most k edges in it.
This minimum is over all a so that (a,b) is in E.
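As a concrete sketch, the pseudocode above translates to Python roughly as follows. The dict-of-dicts adjacency representation and the function name are choices made for this sketch (the slides only give pseudocode), and the example's edge directions are inferred from the d(k) tables shown later.

```python
from math import inf

def bellman_ford_tables(graph, s):
    """Bellman-Ford* as in the slides: keep every table d(k).

    `graph` maps each vertex to a dict {neighbor: edge weight}
    (a hypothetical adjacency representation chosen for this sketch).
    Returns d(n-1), the table of shortest-path costs from s.
    """
    vertices = list(graph)
    n = len(vertices)
    # d[k][b] = cost of the shortest path from s to b with at most k edges.
    d = [{v: inf for v in vertices} for _ in range(n)]
    d[0][s] = 0
    for k in range(1, n):
        for b in vertices:
            best = d[k - 1][b]  # case 1: the path already has at most k-1 edges
            for a in vertices:
                if b in graph[a]:
                    # case 2: reach a with at most k-1 edges, then take edge (a, b)
                    best = min(best, d[k - 1][a] + graph[a][b])
            d[k][b] = best
    return d[n - 1]
```

On the four-vertex example from the slides (with edge directions reconstructed from the tables), this returns s: 0, u: 2, v: 4, t: 2, matching d(3).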
Bellman-Ford Algorithm* Example
• For k = 1, …, n-1:
  • For b in V:
    • d(k)[b] ← min{ d(k-1)[b], min_a { d(k-1)[a] + weight(a,b) } }
[Figure: four-vertex graph on s, u, v, t with edge weights 5, 2, 2, 1, -2]
Running the updates fills in the tables one at a time:
        s  u  v  t
d(0):   0  ∞  ∞  ∞
d(1):   0  2  5  ∞
d(2):   0  2  4  3
d(3):   0  2  4  2
SANITY CHECK:
• The shortest path with 1 edge from s to t has cost ∞ (there is no such path).
• The shortest path with 2 edges from s to t has cost 3 (s-v-t).
• The shortest path with 3 edges from s to t has cost 2 (s-u-v-t).
And this one is the shortest path!!!
(Recall: d(k)[b] is the cost of the shortest path from s to b with at most k edges in it.)
How do we actually implement this?
(This is what the * on all the previous slides was for.)
• Don't actually keep all the arrays d(k) around.
  • Just keep two of them at a time; that's all we need.
• Running time: O(mn)
  • That's worse than Dijkstra, but B-F can handle negative edge weights.
• Space complexity:
  • We need space to store the graph and two arrays of size n.
*WARNING: This is slightly different from the version of Bellman-Ford in CLRS. But we will stick with what we just saw for pedagogical reasons. See Lecture Notes 11.5 (listed on the webpage in the Lecture 12 box) for notes on the analysis of the slightly different CLRS version.
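A sketch of the space-efficient version: keep only the previous table d(k-1) and the current table d(k). The dict-of-dicts graph representation and the function name are again assumptions of this sketch, not from the lecture.

```python
from math import inf

def bellman_ford(graph, s):
    """Space-efficient Bellman-Ford: two arrays instead of n tables.

    `graph` maps each vertex to a dict {neighbor: weight} (a hypothetical
    representation). Runs in O(mn) time using O(n) extra space.
    """
    n = len(graph)
    prev = {v: inf for v in graph}  # plays the role of d(k-1)
    prev[s] = 0
    for _ in range(1, n):
        cur = dict(prev)  # case 1: d(k)[b] starts out as d(k-1)[b]
        for a, nbrs in graph.items():
            for b, w in nbrs.items():
                # case 2: reach a with at most k-1 edges, then take edge (a, b)
                if prev[a] + w < cur[b]:
                    cur[b] = prev[a] + w
        prev = cur
    return prev
```

Each pass touches every edge once, giving the O(mn) bound from the slide; the two dicts are the "two arrays of size n."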
Why does it work?
• First, we've been asserting that:
  d(n-1)[b] is the cost of the shortest path from s to b with at most n-1 edges in it.
• Technically, this requires proof!
• We've basically already seen the proof!
• It follows from induction with the inductive hypothesis:
  d(k)[b] is the cost of the shortest path from s to b with at most k edges in it.
Work out the details of this proof on your own! To help you, there's an outline on the next slide. (Which we'll skip for now.)
Sketch of proof [skip this in lecture] that this thing we've been asserting is really true
• Inductive hypothesis: d(k)[b] is the cost of the shortest path from s to b with at most k edges in it.
• Base case: for k = 0, d(0) = [0, ∞, ∞, ∞].
• Inductive step: d(k)[b] ← min{ d(k-1)[b], min_a { d(k-1)[a] + weight(a,b) } }
  • Case 1: the shortest path from s to b has < k edges.
  • Case 2: the shortest path from s to b of length at most k edges has exactly k edges.
  • In either case, we make the correct update.
• Conclusion: when k = n-1, the inductive hypothesis reads:
  d(n-1)[b] is the cost of the shortest path from s to b with at most n-1 edges in it.
Is this the conclusion we want?
• We still need to prove that this implies BF* is correct.
• We return d(n-1).
• Need to show d(n-1)[a] = distance(s, a).
• Enough to show:
  shortest path with at most n-1 edges = shortest path with any number of edges.
(Using: d(n-1)[b] is the cost of the shortest path from s to b with at most n-1 edges in it.)
DANGER!
• If the graph has a negative cycle, this might not be true.
• If there is a negative cycle, there may not be a shortest path between two vertices!
[Figure: the example graph modified to contain a -5 edge, creating a negative cycle]
Shortest path with at most n-1 edges ≠ shortest path with any number of edges.
A negative cycle is a directed cycle with negative total cost.
But if there is no negative cycle…
• Then not only are there shortest paths, but actually there's always a simple shortest path.
  • "Simple" means that the path has no cycles in it.
• A simple path in a graph with n vertices has at most n-1 edges in it.
[Figure: a simple path on the vertices ("Can't add another edge without making a cycle!") next to a path containing a cycle ("This cycle isn't helping. Just get rid of it.")]
Let's go after a new conclusion.
• Theorem: The Bellman-Ford Algorithm* is correct as long as G has no negative cycles.
*We will prove this for our version of Bellman-Ford. See Notes 11.5 or CLRS for the CLRS version.
Proof
• By induction, d(n-1)[b] is the cost of the shortest path from s to b with at most n-1 edges in it.
• If there are no negative cycles, the shortest path with at most n-1 edges is the shortest path with any number of edges.
  • This is because the shortest path is WLOG simple, and all simple paths have at most n-1 edges.
• So the thing we return is equal to the thing we want to return.
So that proves:
• Theorem: The Bellman-Ford Algorithm* is correct as long as G has no negative cycles.
• Further, if G has a negative cycle, Bellman-Ford can detect that. (See Notes 11.5.)
What have we learned?
• The Bellman-Ford algorithm is slower than Dijkstra: O(mn) time.
• But it works with negative edge weights.
  • You'll see how Dijkstra does with negative edge weights in HW5.
• It doesn't work with negative cycles, but in that case shortest paths don't even make sense.
Bellman-Ford is also used in practice.
• e.g., the Routing Information Protocol (RIP) uses something like Bellman-Ford.
  • Older protocol, not used as much anymore.
• Each router keeps a table of distances to every other router.
• Periodically we do a Bellman-Ford update.
• This also means that if there are changes in the network, this will propagate. (Maybe slowly…)

Destination   | Cost to get there | Send to whom?
172.16.1.0    | 34                | 172.16.1.1
10.20.40.1    | 10                | 192.168.1.2
10.155.120.1  | 9                 | 10.13.50.0
This was an example of…
What is dynamic programming?
• It is an algorithm design paradigm
  • like divide-and-conquer is an algorithm design paradigm.
• Usually it is for solving optimization problems
  • e.g., shortest path
Elements of dynamic programming
• Big problems break up into little problems.
  • e.g., shortest path with at most k edges.
• The optimal solution of a problem can be expressed in terms of optimal solutions of smaller sub-problems.
  • e.g., d(k)[b] ← min{ d(k-1)[b], min_a { d(k-1)[a] + weight(a,b) } }
We call this "optimal sub-structure."
Elements of dynamic programming II
• The sub-problems overlap a lot.
  • e.g., lots of different entries of d(k) ask for d(k-1)[a].
  • This means that we can save time by solving a sub-problem just once and storing the answer.
We call this "overlapping sub-problems."
Elements of dynamic programming III
• Optimal substructure.
  • Optimal solutions to sub-problems are sub-solutions to the optimal solution of the original problem.
• Overlapping subproblems.
  • The subproblems show up again and again.
• Using these properties, we can design a dynamic programming algorithm:
  • Keep a table of solutions to the smaller problems.
  • Use the solutions in the table to solve bigger problems.
  • At the end we can use information we collected along the way to find the optimal solution.
  • e.g., recover the shortest path (not just its cost).
Two ways to think about and/or implement DP algorithms
• Top down
• Bottom up
[Image caption: "This picture isn't hugely relevant but I like it."]
Bottom-up approach
• What we just saw.
• Solve the small problems first
  • fill in d(0)
• Then bigger problems
  • fill in d(1)
• …
• Then bigger problems
  • fill in d(n-2)
• Then finally solve the real problem.
  • fill in d(n-1)
Top-down approach
• Think of it like a recursive algorithm.
• To solve the big problem:
  • Recurse to solve smaller problems
  • Those recurse to solve smaller problems
  • etc.
• The difference from divide and conquer:
  • Memoization!
  • Keep track of what small problems you've already solved to prevent re-solving the same problem twice.
Example: top-down** version of BF*
• Bellman-Ford*(G, s):
  • Initialize a bunch of empty tables d(k) for k = 0, …, n-1
  • Fill in d(0)
  • For b in V:
    • BF*_helper(G, s, b, n-1)
• BF*_helper(G, s, b, k):
  • For each a so that (a,b) in E, and also for a = b:
    • If d(k-1)[a] is not already in the table:
      • d(k-1)[a] = BF*_helper(G, s, a, k-1)
  • Return min{ d(k-1)[b], min_a { d(k-1)[a] + weight(a,b) } }
*Not the actual Bellman-Ford algorithm; we don't want to keep all these tables around.
**Probably not the best way to think about Bellman-Ford: this is for DP pedagogy only!
The actual pseudocode here isn't important; I just want to talk about the structure of it.
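The structure of the top-down version can be sketched in Python like this. The memo dictionary plays the role of "is d(k-1)[a] already in the table?"; the names and graph representation are illustrative only, and (as the slides warn) this is for DP pedagogy, not how you would actually implement Bellman-Ford.

```python
from math import inf

def bellman_ford_top_down(graph, s):
    """Top-down (memoized) version of the BF* recurrence.

    `graph` maps each vertex to {neighbor: weight} (a hypothetical
    representation chosen for this sketch).
    """
    vertices = list(graph)
    n = len(vertices)
    # Precompute incoming edges: into[b] = all a with an edge (a, b).
    into = {v: [] for v in vertices}
    for a, nbrs in graph.items():
        for b in nbrs:
            into[b].append(a)

    memo = {}  # (k, b) -> d(k)[b]; this is the memoization table

    def d(k, b):
        if (k, b) not in memo:
            if k == 0:
                memo[(k, b)] = 0 if b == s else inf
            else:
                best = d(k - 1, b)  # case a = b: keep the old value
                for a in into[b]:
                    best = min(best, d(k - 1, a) + graph[a][b])
                memo[(k, b)] = best
        return memo[(k, b)]

    return {b: d(n - 1, b) for b in vertices}
```

Without the memo table, the recursion tree would have roughly n layers of branching calls; with it, each (k, b) pair is computed once, turning the tree into the "recursion DAG" on the next slide. (Note the recursion depth is about n, so for large graphs a real implementation would go bottom-up instead.)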
Visualization of the top-down approach
[Figure: recursion tree with d(n-1)[u] and d(n-1)[v] at the top, each calling for several d(n-2) entries, each of which calls for several d(n-3) entries, and so on.]
This is a really big recursion tree! Naively, n layers, so at least 2^n time!
Visualization of the top-down approach
[Figure: the same calls, with repeated sub-problems merged into shared nodes.]
Now it's a much smaller "recursion DAG"!
What have we learned?
• Dynamic programming is a paradigm in algorithm design.
• Useful when there's optimal substructure:
  • Optimal solutions to a big problem break up into optimal sub-solutions of subproblems.
• Useful when there are overlapping subproblems:
  • Use memoization (aka put it in a table) to prevent repeated work.
• Can be implemented bottom-up or top-down.
• It's a fancy name for a pretty common-sense idea:
  • Don't duplicate work if you don't have to!
Why "dynamic programming"?
• Programming refers to finding the optimal "program."
  • As in, a shortest route is a plan, aka a program.
• Dynamic refers to the fact that it's multi-stage.
  • But also it's just a fancy-sounding name.
• (Not: manipulating computer code in an action movie?)
Why "dynamic programming"?
• Richard Bellman invented the name in the 1950's.
• At the time, he was working for the RAND Corporation, which was basically working for the Air Force, and government projects needed flashy names to get funded.
• From Bellman's autobiography:
  • "It's impossible to use the word, dynamic, in the pejorative sense… I thought dynamic programming was a good name. It was something not even a Congressman could object to."
Another example
• Floyd-Warshall Algorithm
  • This is an algorithm for All-Pairs Shortest Paths (APSP).
  • That is, I want to know the shortest path from u to v for ALL pairs u, v of vertices in the graph.
  • Not just from a special single source s.
[Figure: the four-vertex example graph]

Shortest-path distances (row = source, column = destination):
     s  u  v  t
s    0  2  4  2
u    1  0  2  0
v    ∞  ∞  0  -2
t    ∞  ∞  ∞  0
• Naïve solution (if we want to handle negative edge weights):
  • For all s in G:
    • Run Bellman-Ford on G starting at s.
  • Time O(n · nm) = O(n²m),
    • maybe as bad as n⁴ if m = n².
Optimal substructure
Label the vertices 1, 2, …, n. (We omit edges in the picture below.)
[Figure: vertices 1, 2, 3, …, k-1 inside a blue set, with k, k+1, …, n outside; a path from u to v whose internal vertices all lie in the blue set]
Sub-problem: For all pairs u, v, find the cost of the shortest path from u to v, so that all the internal vertices on that path are in {1, …, k-1}.
Let D(k-1)[u,v] be the solution to this sub-problem.
This is the shortest path from u to v through the blue set. It has length D(k-1)[u,v].
Question: How can we find D(k)[u,v] using D(k-1)?
How can we find D(k)[u,v] using D(k-1)?
D(k)[u,v] is the cost of the shortest path from u to v so that all internal vertices on that path are in {1, …, k}.
[Figure: the same picture, with vertex k now added to the blue set]
Case 1: we don't need vertex k.
  D(k)[u,v] = D(k-1)[u,v]
Case 2: we need vertex k.
Case 2, continued
• Suppose there are no negative cycles.
• Then WLOG the shortest path from u to v through {1, …, k} is simple.
• If that path passes through k, it must look like this:
[Figure: a path u → … → k → … → v, where each half stays inside {1, …, k-1}]
• The first half is the shortest path from u to k through {1, …, k-1}.
  • Sub-paths of shortest paths are shortest paths.
  • Same for the second half.
So in Case 2 (we need vertex k):
  D(k)[u,v] = D(k-1)[u,k] + D(k-1)[k,v]
How can we find D(k)[u,v] using D(k-1)?
• D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
  • Case 1: cost of the shortest path through {1, …, k-1}.
  • Case 2: cost of the shortest path from u to k and then from k to v, through {1, …, k-1}.
• Optimal substructure:
  • We can solve the big problem using smaller problems.
• Overlapping sub-problems:
  • D(k-1)[k,v] can be used to help compute D(k)[u,v] for lots of different u's.
• Using our paradigm, this immediately gives us an algorithm!
Floyd-Warshall algorithm
• Initialize n-by-n arrays D(k) for k = 0, …, n
  • D(k)[u,u] = 0 for all u, for all k
  • D(k)[u,v] = ∞ for all u ≠ v, for all k
  • D(0)[u,v] = weight(u,v) for all (u,v) in E.
• For k = 1, …, n:
  • For pairs u, v in V²:
    • D(k)[u,v] = min{ D(k-1)[u,v], D(k-1)[u,k] + D(k-1)[k,v] }
• Return D(n)
The base case checks out: the only paths through zero other vertices are edges directly from u to v.
This is a bottom-up algorithm.
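A sketch of the bottom-up algorithm in Python, keeping only one table at a time rather than all n+1. Vertices are numbered 0, …, n-1 instead of 1, …, n, and the edge-list input format is an assumption of this sketch.

```python
from math import inf

def floyd_warshall(n, edges):
    """Bottom-up Floyd-Warshall on vertices 0..n-1.

    `edges` is a list of (u, v, weight) triples (a hypothetical input
    format). Returns the final table D(n) of all-pairs distances.
    """
    # Base case D(0): distance 0 to yourself, direct edges otherwise.
    D = [[inf] * n for _ in range(n)]
    for u in range(n):
        D[u][u] = 0
    for u, v, w in edges:
        D[u][v] = w
    # Build D(k) from D(k-1): allow vertex k as an internal vertex.
    for k in range(n):
        D = [[min(D[u][v], D[u][k] + D[k][v]) for v in range(n)]
             for u in range(n)]
    return D
```

Building a fresh table each round mirrors the D(k-1) → D(k) update from the slides; three nested loops over n vertices give the O(n³) running time, and only two n-by-n arrays are ever alive at once.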
We've basically just shown
• Theorem: If there are no negative cycles in a weighted directed graph G, then the Floyd-Warshall algorithm, running on G, returns a matrix D(n) so that:
  D(n)[u,v] = distance between u and v in G.
• Running time: O(n³)
  • Better than running BF n times!
  • Not really better than running Dijkstra n times.
  • But it's simpler to implement and handles negative weights.
• Storage:
  • Enough to hold two n-by-n arrays, and the original graph.
  • As with Bellman-Ford, we don't really need to store all n of the D(k).
Work out the details of the proof! (Or see Lecture Notes 12 for a few more details.)
What if there are negative cycles?
• Just like Bellman-Ford, Floyd-Warshall can detect negative cycles:
  • If there is a negative cycle, then for some vertex v on it there is a path from v back to v, with internal vertices allowed from all n vertices, that has cost < 0.
    • That's just the definition of a negative cycle.
  • So D(n)[v,v] < 0.
• So check for that at the end.
  • If there is such a v, return "negative cycle."
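The check can be sketched as follows. This version relaxes the table in place, so in the presence of a negative cycle the entries may drop below the exact D(n) values, but the sign of the diagonal still detects the cycle. The function name and edge-list input format are illustrative.

```python
from math import inf

def negative_cycle_vertices(n, edges):
    """Run Floyd-Warshall, then report every v with D[v][v] < 0,
    i.e., a cheaper-than-free way from v back to itself.
    Vertices are 0..n-1; `edges` is a list of (u, v, weight) triples.
    """
    D = [[inf] * n for _ in range(n)]
    for v in range(n):
        D[v][v] = 0
    for u, v, w in edges:
        D[u][v] = min(D[u][v], w)
    for k in range(n):
        for u in range(n):
            for v in range(n):
                if D[u][k] + D[k][v] < D[u][v]:
                    D[u][v] = D[u][k] + D[k][v]
    # A negative diagonal entry means v lies on a negative cycle.
    return [v for v in range(n) if D[v][v] < 0]
```

An empty result means no negative cycle was found, so the distances in the table are the true shortest-path costs.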
What have we learned?
• The Floyd-Warshall algorithm is another example of dynamic programming.
• It computes All-Pairs Shortest Paths in a directed weighted graph in time O(n³).
Another example?
• Longest simple path (say all edge weights are 1):
[Figure: small graph on vertices s, a, b, t]
What is the longest simple path from s to t?
This is an optimization problem…
• Can we use dynamic programming?
• Optimal substructure?
  • Longest path from s to t = longest path from s to a + longest path from a to t?
[Figure: the same graph]
NOPE!
This doesn't work
• The subproblems we came up with aren't independent:
  • Once we've chosen the longest path from a to t, which uses b,
  • our longest path from s to a shouldn't be allowed to use b,
  • since b was already used.
[Figure: the same graph]
What went wrong?
• Actually, the longest simple path problem is NP-complete.
  • We don't know of any polynomial-time algorithms for it, DP or otherwise!
Recap
• Two more shortest-path algorithms:
  • Bellman-Ford for single-source shortest path
  • Floyd-Warshall for all-pairs shortest path
• Dynamic programming!
  • This is a fancy name for:
    • Break up an optimization problem into smaller problems.
    • The optimal solutions to the sub-problems should be sub-solutions to the original problem.
  • Build the optimal solution iteratively by filling in a table of sub-solutions.
  • Take advantage of overlapping sub-problems!
Next time
• More examples of dynamic programming!
We will stop bullets with our action-packed coding skills, and also maybe find longest common subsequences.