Given a binary tree rooted at `root`, the depth of each node is the shortest distance to the root. A node is deepest if it has the largest depth possible among any node in the entire tree.
The subtree of a node is that node, plus the set of all descendants of that node.
Return the node with the largest depth such that it contains all the deepest nodes in its subtree.
Example 1:
Input: [3,5,1,6,2,0,8,null,null,7,4]
Output: [2,7,4]
Explanation: The deepest nodes are 7 and 4; the smallest subtree containing both of them is rooted at the node with value 2.
Note:
The number of nodes in the tree will be between 1 and 500.
The values of each node are unique.
This problem gives us a binary tree and asks us to find the smallest subtree that contains all of the deepest nodes, returning the root of that smallest subtree. The problem provides an example with a figure, from which it is easy to see that the deepest nodes are 7 and 4, and the root of the smallest subtree containing both of them is 2, which is what we return. Note that there are not necessarily only two deepest nodes; there can be many. For instance, in a complete binary tree, i.e. the example tree with nodes 7 and 4 removed, there are four deepest nodes: 6, 2, 0, and 8. The only subtree containing all of them is the original tree itself, so the root node should be returned.
From the analysis above we can see that the maximum depth of a subtree is the key. For a complete binary tree, the maximum depths of the root's left and right subtrees must be equal, in which case the root can be returned directly. If the maximum depths of the two subtrees differ, the deepest nodes must all lie in the deeper subtree, so we call the recursive function on that side. All we need to write, then, is a recursive helper that computes the maximum depth; we use it to compare the maximum depths of the left and right subtrees, and based on the difference decide which child to call the current recursive function on. The two recursive functions intertwine with each other, a beautiful sight. See the code below:
Solution One:
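The original post's code block is not preserved here; below is a minimal C++ sketch of the mutually recursive approach just described, assuming the standard LeetCode `TreeNode` definition:

```cpp
#include <algorithm>
using namespace std;

// Assumed: the standard LeetCode binary-tree node definition.
struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
};

class Solution {
public:
    TreeNode* subtreeWithAllDeepest(TreeNode* root) {
        int diff = depth(root->left) - depth(root->right);
        // Equal subtree depths: deepest nodes sit on both sides, so the
        // current node is the answer; otherwise recurse into the deeper side.
        return diff == 0 ? root : subtreeWithAllDeepest(diff > 0 ? root->left : root->right);
    }
    // Maximum depth of the subtree rooted at node.
    int depth(TreeNode* node) {
        return node ? max(depth(node->left), depth(node->right)) + 1 : 0;
    }
};
```

Because `depth` is recomputed at every step down the tree, this can degrade to O(n^2) on a skewed tree, which motivates the one-pass version below.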
The solution above is actually not very efficient: for every node it recomputes the maximum depths of the left and right subtrees, so there is a lot of repeated work. To improve the time complexity we inevitably have to sacrifice some space. The recursive function now returns a pair made up of the node's maximum depth and the smallest subtree containing all of the deepest nodes. In the main function, we call the recursion on the root and take the second item of the returned pair.
In the recursive function, first check whether the node exists; if it is empty, directly return a {0, NULL} pair. Otherwise call the recursion on the left and right children and store the returned pairs in left and right, then take the maximum depths of the two subtrees, d1 and d2, out of left and right. Now build the pair to return: the first item is the current node's maximum depth, i.e. the larger of the two subtree depths plus 1. The second item, the smallest subtree containing all of the deepest nodes, is decided by comparing d1 and d2: if d1 == d2, the deepest nodes appear in both subtrees, so the current node itself is the answer; if d1 > d2 it is left.second; otherwise it is right.second. This folds the original two recursions into a single recursive function, which greatly improves the running efficiency. See the code below:
Solution Two:
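The original one-pass code is likewise missing; here is a C++ sketch that matches the description above, reusing the assumed `TreeNode` definition from Solution One:

```cpp
#include <utility>
#include <algorithm>
using namespace std;

class Solution {
public:
    TreeNode* subtreeWithAllDeepest(TreeNode* root) {
        return helper(root).second;
    }
    // Returns {max depth of this subtree,
    //          root of the smallest subtree containing all deepest nodes}.
    pair<int, TreeNode*> helper(TreeNode* node) {
        if (!node) return {0, nullptr};
        auto left = helper(node->left), right = helper(node->right);
        int d1 = left.first, d2 = right.first;
        // d1 == d2: deepest nodes on both sides, so the current node is the
        // answer; otherwise the answer is carried up from the deeper subtree.
        TreeNode* res = d1 == d2 ? node : (d1 > d2 ? left.second : right.second);
        return {max(d1, d2) + 1, res};
    }
};
```

Each node is visited exactly once, so this runs in O(n) time.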
GitHub sync address:
#865
References:
https://leetcode.com/problems/smallest-subtree-with-all-the-deepest-nodes/
https://leetcode.com/problems/smallest-subtree-with-all-the-deepest-nodes/discuss/146808/One-pass
https://leetcode.com/problems/smallest-subtree-with-all-the-deepest-nodes/discuss/146786/Simple-recursive-Java-Solution
https://leetcode.com/problems/smallest-subtree-with-all-the-deepest-nodes/discuss/146842/Short-and-concise-C%2B%2B-solution-using-DFS-3~5-lines
LeetCode All in One — explanations of all problems (continuously updated...)