
Design a search autocomplete system for a search engine. Users may input a sentence (at least one word and end with a special character '#'). For each character they type except '#', you need to return the top 3 historical hot sentences that have prefix the same as the part of sentence already typed. Here are the specific rules:

Your job is to implement the following functions:

The constructor function:

AutocompleteSystem(String[] sentences, int[] times): This is the constructor. The input is historical data. Sentences is a string array consisting of previously typed sentences. Times is the corresponding number of times each sentence has been typed. Your system should record these historical data.

Now, the user wants to input a new sentence. The following function will provide the next character the user types:

List<String> input(char c): The input c is the next character typed by the user. The character will only be lower-case letters ('a' to 'z'), blank space (' ') or a special character ('#'). Also, the previously typed sentence should be recorded in your system. The output will be the top 3 historical hot sentences that have prefix the same as the part of sentence already typed.

Example:
Operation: AutocompleteSystem(["i love you", "island", "ironman", "i love leetcode"], [5, 3, 2, 2])
The system has already tracked down the following sentences and their corresponding times:
"i love you" : 5 times
"island" : 3 times
"ironman" : 2 times
"i love leetcode" : 2 times
Now, the user begins another search:
Operation: input('i')
Output: ["i love you", "island", "i love leetcode"]
Explanation:
There are four sentences that have prefix "i". Among them, "ironman" and "i love leetcode" have the same hot degree. Since ' ' has ASCII code 32 and 'r' has ASCII code 114, "i love leetcode" should be in front of "ironman". Also we only need to output the top 3 hot sentences, so "ironman" will be ignored.

Operation: input(' ')
Output: ["i love you", "i love leetcode"]
Explanation:
There are only two sentences that have prefix "i ".

Operation: input('a')
Output: []
Explanation:
There are no sentences that have prefix "i a".

Operation: input('#')
Output: []
Explanation:
The user finished the input, so the sentence "i a" should be saved as a historical sentence in the system. The following input will be counted as a new search.

Note:
This problem asks us to implement a simple search autocomplete system. When searching with Google or Baidu, you get this experience: as you type a few words, the search box pops up some complete sentences that start with what you have typed; that is a search autocomplete system. According to the problem, the suggested sentences are ranked by how often they appeared before: the more frequent ones appear at the top, and sentences with the same frequency are shown in ASCII order. The input arrives one character at a time, and for each character we return the autocomplete suggestions; when a '#' character is received, the current sentence is complete.

We clearly need a HashMap that maps each sentence to its frequency, plus a string data that stores the characters typed so far. In the constructor we are given some sentences and their counts, so we simply insert them into the HashMap and initialize data to an empty string. In the input function, we first check whether the input character is '#'. If it is, the current data string is a complete sentence, so we increment its count in the HashMap by 1, clear data, and return an empty list. Otherwise we append the current character to data, and then we need to find the top three hottest sentences that have data as a prefix. A priority queue works well here. The idea is to always keep the three hottest candidates in the queue, so the sentence with the lower frequency (or the larger ASCII order on a tie) should sit at the front where it can be removed at any time; in other words, the queue should be a min-heap. It holds pairs of a sentence and its frequency, ordered by frequency, so we need to supply a custom comparator for the priority queue. We then iterate over all sentences in the HashMap. For each one we first verify whether data is a prefix; there is no clever trick here, we just compare character by character with a flag matched initialized to true, setting it to false and breaking on the first mismatch. If matched is still true, data is a prefix, so we push the pair into the priority queue, and if the queue then holds more than three elements we pop the front; since it is a min-heap, the lower-frequency sentence is removed first. Finally we move the queue's elements into the result res; the sentences popped first are the colder ones, so they go at the end of res. See the code below:
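The original code block did not survive in this issue, so here is a minimal C++ sketch of the approach described above (an unordered_map from sentence to frequency plus a size-3 min-heap); it reconstructs the idea rather than reproducing the author's exact code.

```cpp
#include <queue>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>
using namespace std;

class AutocompleteSystem {
public:
    AutocompleteSystem(vector<string>& sentences, vector<int>& times) {
        // Record the historical data: sentence -> number of times typed.
        for (size_t i = 0; i < sentences.size(); ++i) freq[sentences[i]] += times[i];
        data = "";
    }

    vector<string> input(char c) {
        if (c == '#') {          // sentence finished: record it and reset
            ++freq[data];
            data = "";
            return {};
        }
        data.push_back(c);
        // Min-heap: lower frequency (or, on ties, larger ASCII order) stays on top
        // so it can be evicted once the heap holds more than 3 candidates.
        auto cmp = [](const pair<string, int>& a, const pair<string, int>& b) {
            return a.second > b.second || (a.second == b.second && a.first < b.first);
        };
        priority_queue<pair<string, int>, vector<pair<string, int>>, decltype(cmp)> q(cmp);
        for (const auto& f : freq) {
            // Check character by character whether data is a prefix of this sentence.
            bool matched = data.size() <= f.first.size();
            for (size_t i = 0; matched && i < data.size(); ++i) {
                if (data[i] != f.first[i]) matched = false;
            }
            if (matched) {
                q.push(f);
                if (q.size() > 3) q.pop();   // evict the "coldest" candidate
            }
        }
        // The heap pops the colder sentences first, so fill res from back to front.
        vector<string> res(q.size());
        for (int i = (int)q.size() - 1; i >= 0; --i) {
            res[i] = q.top().first;
            q.pop();
        }
        return res;
    }

private:
    unordered_map<string, int> freq;  // sentence -> hot degree
    string data;                      // characters typed so far in the current sentence
};
```

With the example above, input('i') returns ["i love you", "island", "i love leetcode"], input(' ') narrows it to ["i love you", "i love leetcode"], input('a') returns [], and input('#') records "i a" as a new historical sentence.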
GitHub sync link:
#642
Similar problems:
Implement Trie (Prefix Tree)
Top K Frequent Words
References:
https://leetcode.com/problems/design-search-autocomplete-system/
https://leetcode.com/problems/design-search-autocomplete-system/discuss/176550/Java-simple-solution-without-using-Trie-(only-use-HashMap-and-PriorityQueue)
https://leetcode.com/problems/design-search-autocomplete-system/discuss/105379/Straight-forward-hash-table-%2B-priority-queue-solution-in-c%2B%2B-no-trie
LeetCode All in One solution index (continuously updated...)