You can even use the Replit URL to run your job, as long as the Repl is running. Make sure to replace `YOUR_DISCORD_WEBHOOK_URL` below with your actual Discord webhook URL, and then run the following in your terminal:
```bash
# This assumes your API key is set in the current env
# BOOPER_API_KEY=sk_...

curl --location --request POST 'https://scheduler.booper.dev/api/jobs' \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $BOOPER_API_KEY" \
  --data-raw '{
    "method": "post",
    "url": "https://hn.reichertjalex.repl.co/api/run",
    "body": {
      "min": 200,
      "webhook": "YOUR_DISCORD_WEBHOOK_URL",
      "exclude": "$previous.sent"
    },
    "repeat_every": [1, "hour"]
  }'
```
We’ll use the Hacker News API endpoints to fetch the stories on the front page. Here’s what that function might look like:
```js
const get = async (url) => fetch(url).then((r) => r.json());

const fetchStories = async ({ pages = 1, min = 100, exclude = [] } = {}) => {
  const storyIds = await get(
    `https://hacker-news.firebaseio.com/v0/topstories.json`
  );

  return Promise.all(
    storyIds
      // Filter out posts we've already seen
      .filter((id) => !exclude.includes(id))
      // Limit to the first page (30 posts per page)
      .slice(0, pages * 30)
      .map(async (id) => {
        const story = await get(
          `https://hacker-news.firebaseio.com/v0/item/${id}.json`
        );

        return story;
      })
  );
};
```
It’s basically an N+1 query, but no big deal. If we limit the results to the first page (i.e. the first 30 posts), the request is generally fast enough.
{"stories":[{"title":"Get a cable modem, go to jail (1999)","url":"http://telecom.csail.mit.edu/judy-sammel.html","score":447},{"title":"Writing a C compiler in 500 lines of Python","url":"https://vgel.me/posts/c500/","score":432},{"title":"Emacs Bedrock: A minimal Emacs starter kit","url":"https://sr.ht/~ashton314/emacs-bedrock/","score":214}// ...]}
Now, the simplest thing we can do is just send an alert to Discord whenever we find posts that meet the minimum score requirement set in the request body.
```js
export default async function handler(req, res) {
  const { min = 100, pages = 1 } = req.body;
  const stories = await fetchStories({ pages });
  // After we fetch the stories, we filter out the ones that
  // don't have the minimum score specified in the request body
  const results = stories.filter((s) => s.score >= min);

  if (results.length > 0) {
    // Send notification to Discord using the helpers defined above
    await notify(format(results));
  }

  return res.status(200).json({ results });
}
```
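(The `notify` and `format` helpers here were defined earlier in the post. If you're just skimming, a minimal sketch of them might look something like this, assuming your Discord webhook URL is available as an environment variable:)

```js
// A minimal sketch of the helpers above; the message formatting and
// env variable name (DISCORD_WEBHOOK_URL) are illustrative.
const format = (stories) =>
  stories
    .map((s) => `**${s.title}** (${s.score} points)\n${s.url}`)
    .join('\n\n');

const notify = async (content) => {
  // Discord webhooks accept a JSON payload with a `content` field
  return fetch(process.env.DISCORD_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
};
```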
This is fine if we run the script every hour or two, and don’t mind seeing repeated links. But what if we want to avoid sending duplicates?
There are two ways we can keep track of which posts have been sent out already.
The first option uses the scheduler's built-in metadata. When you create a job on a schedule, the scheduler passes some metadata into the body of each request. Included in that metadata is the response from the previous run, available at `req.body.$previous`.
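For example, if the previous run's handler responded with a `sent` array of post IDs, the next request body might look roughly like this (a sketch with made-up IDs; the metadata may include other fields as well):

```json
{
  "min": 200,
  "$previous": {
    "sent": [101, 102, 103]
  }
}
```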
Let’s modify the API handler to take advantage of the $previous data:
```js
export default async function handler(req, res) {
  const { min = 100, pages = 1 } = req.body;
  // Add this line! 👇
  const exclude = req.body.$previous?.sent || [];
  // And then pass in the `exclude` parameter to exclude
  // previously sent posts from the results of `fetchStories`
  const stories = await fetchStories({ pages, exclude });
  const results = stories.filter((s) => s.score >= min);

  if (results.length > 0) {
    await notify(format(results));
  }

  return res.status(200).json({
    // And modify the response to return all the IDs of sent posts,
    // including the previously sent posts (from `exclude`) 👇
    sent: [...exclude, ...results.map((s) => s.id)],
  });
}
```
If we want, we can also clean this up a bit by taking advantage of dynamic values in our job schedule configuration. When we set the request body for our job, we can write it like this:
```js
{
  // Set the minimum score requirement to 200 points
  "min": 200,
  // Pass in `req.body.$previous?.sent` as `req.body.exclude`
  "exclude": "$previous.sent"
}
```
If we do this, we can change the line above from this:
```js
const exclude = req.body.$previous?.sent || [];
```
To this:
```js
const exclude = req.body.exclude || [];
```
Now, assuming you’ve deployed your API endpoint to https://yourdomain.com/api/hn, you can create your scheduled job by running the following in your terminal, with your `BOOPER_API_KEY` set and the `url` updated to your actual domain:
```bash
# This assumes your API key is set in the current env
# BOOPER_API_KEY=sk_...

curl --location --request POST 'https://scheduler.booper.dev/api/jobs' \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $BOOPER_API_KEY" \
  --data-raw '{
    "method": "post",
    "url": "https://yourdomain.com/api/hn",
    "body": { "min": 200, "exclude": "$previous.sent" },
    "repeat_every": [1, "hour"]
  }'
```
The second option uses the schedule’s state, which is also included in the metadata: it can be accessed at `req.body.$state`, and set or updated in the response.
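In other words, each request receives the stored state in its body, and the handler can update it by returning a `$set` key in its response. Roughly (a sketch with made-up IDs):

```js
// The request body on each run includes the stored state:
{ "min": 200, "$state": { "sent": [101, 102] } }

// Responding with a `$set` key updates that stored state:
{ "$set": { "sent": [101, 102, 103] } }
```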
Let’s modify the API handler to take advantage of $state:
```js
export default async function handler(req, res) {
  const { min = 100, pages = 1 } = req.body;
  // Add this line! 👇
  const exclude = req.body.$state?.sent || [];
  // And then pass in the `exclude` parameter to exclude
  // previously sent posts from the results of `fetchStories`
  const stories = await fetchStories({ pages, exclude });
  const results = stories.filter((s) => s.score >= min);

  if (results.length > 0) {
    await notify(format(results));
  }

  return res.status(200).json({
    $set: {
      sent: [...exclude, ...results.map((s) => s.id)],
    },
  });
}
```
This may look similar to what we did above when using the $previous metadata, but there is one key difference: while $previous data is ephemeral, the schedule’s $state is stored in the database. This means that if a schedule is paused and then restarted again, the $previous data will be lost, but $state will be persisted.
Once again, we can clean this up a bit by taking advantage of dynamic values in our job schedule configuration. When we set the request body for our job, we can write it like this:
```js
{
  // Set the minimum score requirement to 200 points
  "min": 200,
  // Pass in `req.body.$state?.sent` as `req.body.exclude`
  "exclude": "$state.sent"
}
```
If we do this, we can change the line above from this:
```js
const exclude = req.body.$state?.sent || [];
```
To this:
```js
const exclude = req.body.exclude || [];
```
Now, assuming you’ve deployed your API endpoint to https://yourdomain.com/api/hn, you can create your scheduled job by running the following in your terminal, with your `BOOPER_API_KEY` set and the `url` updated to your actual domain:
```bash
# This assumes your API key is set in the current env
# BOOPER_API_KEY=sk_...

curl --location --request POST 'https://scheduler.booper.dev/api/jobs' \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $BOOPER_API_KEY" \
  --data-raw '{
    "method": "post",
    "url": "https://yourdomain.com/api/hn",
    "body": { "min": 200, "exclude": "$state.sent" },
    "repeat_every": [1, "hour"]
  }'
```