
## Where I'm at
I've been hacking at the "vomit draft" version of this tool (which I'll explain more about below, if you're interested). It's at a point now where I've proved the basic premise to myself, so I'm ready to swap out my messy first draft for something with a bit more formal correctness. In particular, the current version naively converts raw JSON to strings, concatenates them, and saves the result as a .svelte file.
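For a sense of what that naive version looks like, here's a minimal sketch. The block shapes loosely mirror Notion's API JSON, and the markup mapping is invented for illustration:

```js
// Rough sketch of the current string-concatenation approach. The block
// structure approximates Notion's API JSON; the HTML mapping is made up.
import { writeFileSync } from 'node:fs';

const blocks = [
  { type: 'paragraph', paragraph: { rich_text: [{ plain_text: 'Hello' }] } },
  { type: 'paragraph', paragraph: { rich_text: [{ plain_text: 'World' }] } },
];

function blockToString(block) {
  if (block.type === 'paragraph') {
    const text = block.paragraph.rich_text.map((t) => t.plain_text).join('');
    return `<p>${text}</p>`;
  }
  return `<!-- unhandled block type: ${block.type} -->`;
}

writeFileSync('Page.svelte', blocks.map(blockToString).join('\n'));
```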
## Where I want to be
Here's pseudo-code for what I want to end up with (the plugin names are hypothetical; they're what I'd need to build):
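```js
// Pseudo-code: notionParse and svelteCompile don't exist yet; they're the
// parser and compiler plugins I'm trying to figure out how to write.
import { unified } from 'unified';
import { writeFile } from 'node:fs/promises';

const file = await unified()
  .use(notionParse)    // parser: Notion block JSON -> syntax tree
  .use(svelteCompile)  // compiler: syntax tree -> Svelte source
  .process(rawNotionJson);

// The output filename here is illustrative.
await writeFile('Page.svelte', String(file));
```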
I feel like I understand how all these pieces fit together, and have done a bunch of tutorial work, but none of the examples seems to quite fit my scenario, and whenever I dive in hoping that I'll figure it out along the way, my understanding gets swamped by Unified's sheer scale.
## The question (for now)
What are the pieces I'm going to need, and where should I look to find the simplest model(s) to follow in putting them together?
I feel like I'm super close to getting this, but right now my brain just keeps getting overwhelmed by the abstractness of it all. 😬
## Background
I'm not sure if this helps at all, but here's the higher-level view of what I'm trying to do. Maybe it'll interest you. Maybe it'll just help make it clear that I'm going about this all wrong. Anyway, here are the possibly-irrelevant details of why I'm making `notion2svelte`.

The first part's straightforward: I find writing to be easier in Notion than in any of the other tools I've tried, including heavy hitters like Ulysses and Scrivener. So, input-wise, Notion.so is the choice, which also makes it my de facto CMS.
Here's where I stray slightly from conventional wisdom, which would have me making API calls to Notion from the client's machine in order to hydrate pages with the most up-to-date content. What I want to do instead (because I'm a control freak?) is pull any pages that I've marked in Notion as Ready To Publish and transform them (I think, using unified?) into Svelte pages, where the content (and sometimes settings such as "checked" for a to-do item) gets translated into simple inputs to pre-conceived Svelte components, similar to what I've already implemented, spitting the results out as .svelte files. This will allow me to store both the raw JSON and the rendered Svelte in Git, which works well for what's likely to be an entirely static site, and gives me full control of the end product.
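To make the "pull" step concrete, here's a sketch using the official @notionhq/client SDK. The checkbox property name ("Ready To Publish") and the environment variables are assumptions about my own workspace setup:

```js
// Fetch every page whose "Ready To Publish" checkbox is checked.
import { Client } from '@notionhq/client';

const notion = new Client({ auth: process.env.NOTION_TOKEN });

const { results } = await notion.databases.query({
  database_id: process.env.NOTION_DATABASE_ID,
  filter: {
    property: 'Ready To Publish', // assumed property name in my database
    checkbox: { equals: true },
  },
});

// Each result is a page object; its blocks can then be fetched with
// notion.blocks.children.list({ block_id: page.id }) and handed to the
// pipeline sketched above.
```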
Ideally, the tools I create along the way can be generic enough for others to make use of them, even if they'd rather work with streams than files because, say, they want the site to pull pages from Notion on the fly. ¯\_(ツ)_/¯
Very interesting! Have you talked at all with the maintainer(s) of https://github.com/phuctm97/ntast and/or https://github.com/dragonman225/nast?
In short: two plugins, a parser and a compiler; breaking it down into utilities beyond those two is optional.
A parser in unified is something that takes in a string/text and outputs an AST; a compiler takes an AST and generates a string (or a Svelte component).
There is some high level API info at https://github.com/unifiedjs/unified#processorparser and https://github.com/unifiedjs/unified#processorcompiler
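Here's a minimal sketch of those two plugins, assuming unified v11, where the processor fields are lowercase `parser`/`compiler` (v10 and earlier use `Parser`/`Compiler`). The tree shape and the Svelte output are simplified placeholders, not a real Notion AST:

```js
import { unified } from 'unified';

function notionParse() {
  // Parser plugin: attach a parser that turns a string into a syntax tree.
  this.parser = (doc) => ({ type: 'root', children: JSON.parse(doc) });
}

function svelteCompile() {
  // Compiler plugin: attach a compiler that turns the tree into a string.
  this.compiler = (tree) =>
    tree.children.map((block) => `<Block type="${block.type}" />`).join('\n');
}

const file = await unified()
  .use(notionParse)
  .use(svelteCompile)
  .process('[{"type":"paragraph"},{"type":"to_do"}]');

console.log(String(file));
// -> <Block type="paragraph" />
//    <Block type="to_do" />
```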
There are s…