
Given a nested list of integers, return the sum of all integers in the list weighted by their depth.
Each element is either an integer or a list, whose elements may in turn be integers or other lists.
Example 1:
Given the list [[1,1],2,[1,1]], return 10. (Four 1's at depth 2 and one 2 at depth 1: 4*1*2 + 2*1 = 10.)

Example 2:
Given the list [1,[4,[6]]], return 27. (One 1 at depth 1, one 4 at depth 2, and one 6 at depth 3: 1*1 + 4*2 + 6*3 = 27.)

This problem defines a nested list structure in which lists can be nested arbitrarily deep; each additional level of nesting increases the depth by 1, and we are asked for the weighted sum, i.e., each integer multiplied by its depth, summed over the whole structure. Since the nesting can be arbitrarily deep, a depth-first search (DFS) is the simplest approach: whenever we meet a nested list, we call the function recursively and work our way down level by level. My first idea was to iterate over the given array of nested objects, call a getSum function on each one with an initial depth of 1, and accumulate the results. Inside getSum, we first check whether the element is an integer; if so, we return the current depth times that integer. Otherwise we iterate over its nested list, call getSum recursively on each element, and accumulate and return the results. See the code below:
Solution 1:
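A minimal C++ sketch of the approach described above, assuming the standard LeetCode NestedInteger stub for this problem (isInteger, getInteger, getList); getSum is the helper from the explanation:

// NestedInteger is the interface LeetCode provides for this problem;
// the method names used here follow the usual C++ stub and are assumed.
class Solution {
public:
    int depthSum(vector<NestedInteger>& nestedList) {
        int res = 0;
        // Every top-level element starts at depth 1.
        for (const auto& ni : nestedList) {
            res += getSum(ni, 1);
        }
        return res;
    }
    int getSum(const NestedInteger& ni, int depth) {
        // An integer contributes its value times the current depth.
        if (ni.isInteger()) return depth * ni.getInteger();
        // Otherwise recurse into the nested list, one level deeper.
        int sum = 0;
        for (const auto& child : ni.getList()) {
            sum += getSum(child, depth + 1);
        }
        return sum;
    }
};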
The method above can actually be simplified: we can treat the given one-dimensional array of nested lists itself as a single nested-list object and call the recursive function on it directly. The recursive function works the same way as above, except that a ternary expression makes it look a bit more concise:
Solution 2:
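A sketch of the more concise variant under the same assumed interface; the recursive helper (named helper here for illustration) takes the vector itself and uses a ternary to pick between the integer case and the recursive case:

class Solution {
public:
    int depthSum(vector<NestedInteger>& nestedList) {
        // Treat the whole input vector as one nested-list object.
        return helper(nestedList, 1);
    }
    int helper(const vector<NestedInteger>& nl, int depth) {
        int res = 0;
        for (const auto& ni : nl) {
            // Either take the integer at this depth, or recurse one level deeper.
            res += ni.isInteger() ? ni.getInteger() * depth
                                  : helper(ni.getList(), depth + 1);
        }
        return res;
    }
};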
References:
https://leetcode.com/discuss/94956/2ms-easy-to-understand-java-solution
LeetCode All in One problem explanation index (continuously updated)