
LeetCode wants to give one of its best employees the option to travel among N cities to collect algorithm problems. But all work and no play makes Jack a dull boy, so you could take vacations in some particular cities and weeks. Your job is to schedule the traveling to maximize the number of vacation days you could take, but there are certain rules and restrictions you need to follow.
Rules and restrictions:
You're given the flights matrix and days matrix, and you need to output the maximum vacation days you could take during K weeks.
Example 1:
Example 2:
Example 3:
Note:
This problem gives us an N x N matrix flights, indicating whether city i has a direct flight to city j, and an N x K matrix days, giving the number of vacation days we can take in city i during week j. We want to find an itinerary that maximizes the total number of vacation days. A first attempt at a brute-force recursive solution got TLE, and this problem is actually well suited to DP.

We build a 2D DP array where dp[i][j] denotes the total number of vacation days accumulated given that it is currently week j and we are in city i. We update from back to front (don't ask why; the forward formulation is a bit more involved), iterating from week K back toward the first week, so the results all accumulate in dp[i][0] for i in [0, n-1], and the maximum among them (restricted to cities reachable from city 0 on the first Monday, since we start in city 0) is the largest number of vacation days we can take. The tricky part is finding the recurrence. Since dp[i][j] describes being in city i in week j, the previously computed state in this backward sweep is week j+1: in that week we could still be in city i, or in some other city p, provided city i has a direct flight to city p, so dp[p][j+1] can be used to update dp[i][j]. Note that the update starts from the second-to-last week, because the last week has no later state to draw from, and that each dp[i][j] is initialized to days[i][j], so even if no flight leaves the current city we still enjoy its vacation days. Finally, as described above, we take the maximum over dp[i][0]. The code below folds that step into the for loop, which is why it carries a pile of extra conditionals; the maximum could equally well be found after the whole dp array has been updated. See the code below:
Solution 1:
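Below is a minimal C++ sketch of the backward 2D DP described above; the original post's code is not reproduced here, so this is an assumed implementation of the same idea (it finds the maximum after the table is filled rather than inside the loop).

```cpp
#include <vector>
#include <algorithm>
using namespace std;

class Solution {
public:
    int maxVacationDays(vector<vector<int>>& flights, vector<vector<int>>& days) {
        int n = flights.size(), k = days[0].size();
        // dp[i][j]: max vacation days obtainable from week j onward,
        // given that week j is spent in city i (column k is the all-zero base case)
        vector<vector<int>> dp(n, vector<int>(k + 1, 0));
        for (int j = k - 1; j >= 0; --j) {
            for (int i = 0; i < n; ++i) {
                dp[i][j] = days[i][j] + dp[i][j + 1];            // stay in city i next week
                for (int p = 0; p < n; ++p) {
                    if (flights[i][p]) {                          // fly from i to p for next week
                        dp[i][j] = max(dp[i][j], days[i][j] + dp[p][j + 1]);
                    }
                }
            }
        }
        // week 0 must be spent in city 0 or a city reachable from 0 on the first Monday
        int res = dp[0][0];
        for (int i = 1; i < n; ++i) {
            if (flights[0][i]) res = max(res, dp[i][0]);
        }
        return res;
    }
};
```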
The following approach optimizes the space complexity by using only a one-dimensional DP array, where dp[i] denotes the maximum number of vacation days accumulated so far while spending the current week in city i. Every entry except the first is initialized to the smallest integer value (the first is 0, since we start in city 0). We then iterate forward from the first week: for each week we create a temporary array t, also initialized to the smallest integer value, and for every city i we scan all cities p, checking whether p has a direct flight to the current city or is the current city itself, and use dp[p] + days[i][j] to update t[i]. Once every city has been processed, we assign t back to dp and move on to the next week. At the end we simply return the maximum value in the dp array. This version not only saves space but is also a bit more concise. Very nice!
Solution 2:
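A minimal sketch of the space-optimized forward DP just described (an assumed implementation, not necessarily the original post's exact code); INT_MIN marks cities that cannot be reached in the current week.

```cpp
#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

class Solution {
public:
    int maxVacationDays(vector<vector<int>>& flights, vector<vector<int>>& days) {
        int n = flights.size(), k = days[0].size();
        vector<int> dp(n, INT_MIN);
        dp[0] = 0;                                      // we start in city 0
        for (int j = 0; j < k; ++j) {
            vector<int> t(n, INT_MIN);                  // best totals for week j
            for (int i = 0; i < n; ++i) {               // city we spend week j in
                for (int p = 0; p < n; ++p) {           // city we were in last week
                    if ((p == i || flights[p][i]) && dp[p] != INT_MIN) {
                        t[i] = max(t[i], dp[p] + days[i][j]);
                    }
                }
            }
            dp = t;                                     // roll the 1D array forward one week
        }
        return *max_element(dp.begin(), dp.end());
    }
};
```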
As mentioned earlier, the recursive DFS gets TLE, but if we use a memo array to cache intermediate results we can avoid a large amount of repeated computation and pass the OJ. The idea is very similar to Solution 1; see the code below:
Solution 3:
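A minimal sketch of the memoized DFS just described (an assumed implementation following the same idea); memo[i][j] caches the best total from week j onward when we are in city i at the start of that week.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

class Solution {
public:
    int maxVacationDays(vector<vector<int>>& flights, vector<vector<int>>& days) {
        int n = flights.size(), k = days[0].size();
        // -1 means "not computed yet"
        vector<vector<int>> memo(n, vector<int>(k, -1));
        return dfs(flights, days, 0, 0, memo);
    }
private:
    // cur: city we are in at the start of week `week`, before flying
    int dfs(vector<vector<int>>& flights, vector<vector<int>>& days,
            int cur, int week, vector<vector<int>>& memo) {
        if (week == (int)days[0].size()) return 0;      // no weeks left
        if (memo[cur][week] != -1) return memo[cur][week];
        int n = flights.size(), res = 0;
        for (int nxt = 0; nxt < n; ++nxt) {
            // either stay in the current city or take a direct flight to nxt
            if (nxt == cur || flights[cur][nxt]) {
                res = max(res, days[nxt][week] + dfs(flights, days, nxt, week + 1, memo));
            }
        }
        return memo[cur][week] = res;
    }
};
```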
References:
https://discuss.leetcode.com/topic/87865/java-dfs-tle-and-dp-solutions/2
https://discuss.leetcode.com/topic/87869/c-clean-code-graphic-explanation
https://discuss.leetcode.com/topic/89353/short-java-recursion-dfs-memoization
LeetCode All in One: index of problem explanations (continuously updated...)