Given a reference of a node in a connected undirected graph, return a deep copy (clone) of the graph. Each node in the graph contains a val (int) and a list (List[Node]) of its neighbors.

Example:

Note:
This undirected-graph cloning problem is similar to the earlier Copy List with Random Pointer. The difficulty in that problem was handling each node's random pointer; the difficulty here is handling each node's neighbors, since after deep-copying a node we still have to put all of its neighbors into a vector. So how do we avoid copying the same node more than once? Conveniently, all node values are distinct, so we can use a HashMap to map each node in the original graph to its counterpart in the cloned graph. The two basic ways to traverse a graph are depth-first search (DFS) and breadth-first search (BFS); let's solve the problem with DFS first. In the recursive function, first check for null, then check whether the current node has already been cloned: if it is already in the HashMap, return its mapped clone directly. Otherwise clone the current node, record the mapping in the HashMap, then iterate over all of the current node's neighbors, calling the recursive function on each one and appending the result to the clone's neighbors array. The code is as follows:
Solution 1:
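Below is a minimal C++ sketch of the DFS approach described above. It assumes the Node definition matches the one LeetCode provides (an int val plus a vector of neighbor pointers, included here for self-containment), and keys the hash map on the original node pointers:

```cpp
#include <vector>
#include <unordered_map>
using namespace std;

// Node definition as given by LeetCode (assumed here for self-containment).
class Node {
public:
    int val;
    vector<Node*> neighbors;
    Node(int _val = 0) : val(_val) {}
};

class Solution {
public:
    Node* cloneGraph(Node* node) {
        unordered_map<Node*, Node*> cloned;   // original node -> its clone
        return helper(node, cloned);
    }
    Node* helper(Node* node, unordered_map<Node*, Node*>& cloned) {
        if (!node) return nullptr;                    // empty graph
        if (cloned.count(node)) return cloned[node];  // already copied, reuse the clone
        Node* copy = new Node(node->val);
        cloned[node] = copy;                          // record the mapping before recursing, to break cycles
        for (Node* nb : node->neighbors) {
            copy->neighbors.push_back(helper(nb, cloned));
        }
        return copy;
    }
};
```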
We can also traverse the graph with BFS, using a queue as an auxiliary structure; we still need a HashMap to map original nodes to their clones. First clone the starting node, record the mapping, and push it into the queue, then run a while loop. In each iteration, pop the front node and iterate over its neighbors: if a neighbor is not yet in the HashMap, clone a new node from the neighbor's value, record the mapping, and push it into the queue. Then append the neighbor's mapped clone from the HashMap to the current clone's neighbors array. See the code below:
Solution 2:
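Here is a corresponding C++ sketch of the BFS approach, again assuming the LeetCode Node definition and a pointer-keyed hash map:

```cpp
#include <vector>
#include <queue>
#include <unordered_map>
using namespace std;

// Node definition as given by LeetCode (assumed here for self-containment).
class Node {
public:
    int val;
    vector<Node*> neighbors;
    Node(int _val = 0) : val(_val) {}
};

class Solution {
public:
    Node* cloneGraph(Node* node) {
        if (!node) return nullptr;
        unordered_map<Node*, Node*> cloned;   // original node -> its clone
        cloned[node] = new Node(node->val);
        queue<Node*> q;
        q.push(node);
        while (!q.empty()) {
            Node* cur = q.front(); q.pop();
            for (Node* nb : cur->neighbors) {
                if (!cloned.count(nb)) {          // first time this neighbor is seen
                    cloned[nb] = new Node(nb->val);
                    q.push(nb);
                }
                cloned[cur]->neighbors.push_back(cloned[nb]);
            }
        }
        return cloned[node];
    }
};
```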
Similar Problems:
Copy List with Random Pointer
References:
https://leetcode.com/problems/clone-graph/
https://leetcode.com/problems/clone-graph/discuss/42313/C%2B%2B-BFSDFS
https://leetcode.com/problems/clone-graph/discuss/42309/Depth-First-Simple-Java-Solution
LeetCode All in One solution index (continuously updated...)