Get alerted of top HN posts every hour
Let’s set up a script that alerts us every hour about new posts on the Hacker News front page that have exceeded a certain point threshold.
Here’s how we’ll tackle it:
Just show me the code
See the finished code on Replit.
You can even use the Replit URL to run your job, as long as the Repl is running. Make sure to replace `YOUR_DISCORD_WEBHOOK_URL` with your Discord webhook URL, and then you can run the following in your terminal:
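A sketch of what that command might look like, with your `BOOPER_API_KEY` set. The Booper endpoint path, payload fields, and cron format here are assumptions rather than the documented API, and the Repl URL is a placeholder for your own:

```bash
# Create an hourly job that POSTs to the Replit-hosted endpoint.
# Endpoint path, payload fields, and cron syntax are assumed — check the Booper docs.
curl -X POST "https://booper.dev/api/schedules" \
  -H "Authorization: Bearer $BOOPER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://hn-alerts.your-username.repl.co/api/hn",
    "method": "POST",
    "cron": "0 * * * *",
    "body": {
      "min": 100,
      "webhookUrl": "YOUR_DISCORD_WEBHOOK_URL"
    }
  }'
```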
We’ll use the Hacker News API endpoints to fetch the top page stories. Here’s what that function might look like:
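Here’s a minimal sketch, using the public Firebase-hosted HN API (the `Story` type and `fetchTopStories` name are illustrative):

```ts
// The top stories endpoint returns story IDs; each item is then fetched
// individually from /v0/item/{id}.json (hence the N+1 shape).
const HN_API = 'https://hacker-news.firebaseio.com/v0';

export type Story = {
  id: number;
  title: string;
  url?: string;
  score: number;
  by: string;
  time: number;
  type: string;
};

export async function fetchTopStories(limit = 30): Promise<Story[]> {
  const ids: number[] = await fetch(`${HN_API}/topstories.json`).then((r) => r.json());

  // Fetch the first `limit` stories in parallel
  return Promise.all(
    ids.slice(0, limit).map((id) =>
      fetch(`${HN_API}/item/${id}.json`).then((r) => r.json() as Promise<Story>)
    )
  );
}
```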
It’s basically an N+1 query, but no big deal. If we limit the results to the first page (i.e. the first 30 posts), the request is generally fast enough.
View example output of HN posts
Here’s an example of what the output looks like:
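Something like the following, with fields coming straight from the HN item schema (the values shown are illustrative):

```json
[
  {
    "id": 8863,
    "title": "My YC app: Dropbox - Throw away your USB drive",
    "url": "http://www.getdropbox.com/u/2/screencast.html",
    "score": 111,
    "by": "dhouston",
    "time": 1175714200,
    "type": "story"
  }
]
```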
First, let’s define two helper functions:

- `format`: formats story data into Discord markdown
- `notify`: sends an alert to Discord via webhook URL
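A minimal sketch of those two helpers, reusing the `Story` type from above. The message formatting is illustrative; Discord incoming webhooks accept a JSON payload with a `content` field:

```ts
// format: turn a story into a line of Discord-flavored markdown
function format(story: Story): string {
  const link = story.url ?? `https://news.ycombinator.com/item?id=${story.id}`;
  return `**${story.title}** (${story.score} points)\n${link}`;
}

// notify: send the formatted stories to Discord via an incoming webhook URL
async function notify(webhookUrl: string, stories: Story[]): Promise<void> {
  await fetch(webhookUrl, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({content: stories.map(format).join('\n\n')}),
  });
}
```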
Now, the simplest thing we can do is just send an alert to Discord whenever we find posts that meet the minimum score requirement set in the request body.
Here we handle that in a Next.js API endpoint:
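A sketch of that endpoint using the pages router, assuming `fetchTopStories` and `notify` from the sketches above are in scope. The `min` and `webhookUrl` request-body fields are assumptions about how the job is configured:

```ts
// pages/api/hn.ts
import type {NextApiRequest, NextApiResponse} from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // `min` is the score threshold; `webhookUrl` is the Discord webhook to notify.
  const {min = 100, webhookUrl} = req.body || {};

  const stories = await fetchTopStories();
  const matches = stories.filter((story) => story.score >= min);

  if (matches.length > 0) {
    await notify(webhookUrl, matches);
  }

  return res.status(200).json({count: matches.length});
}
```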
This is fine if we run the script every hour or two, and don’t mind seeing repeated links. But what if we want to avoid sending duplicates?
There are two ways we can keep track of which posts have been sent out already.
When you create a job on a schedule, the scheduler will pass some metadata into the body of each request. Included in that metadata is the response from the previous run, which can be accessed at `req.body.$previous`.
Example using the `$previous` metadata
Let’s modify the API handler to take advantage of the `$previous` data:
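A sketch of that change, filtering out anything we alerted on during the previous run. The shape of `$previous` here (an object with an `ids` array) is an assumption that simply mirrors what this handler returns:

```ts
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // `$previous` is the response body from the previous run, injected by the scheduler.
  const {min = 100, webhookUrl, $previous = {}} = req.body || {};
  const previousIds: number[] = $previous.ids || [];

  const stories = await fetchTopStories();
  const matches = stories.filter(
    (story) => story.score >= min && !previousIds.includes(story.id)
  );

  if (matches.length > 0) {
    await notify(webhookUrl, matches);
  }

  // Whatever we return here shows up as `$previous` on the next run.
  return res.status(200).json({ids: [...previousIds, ...matches.map((s) => s.id)]});
}
```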
If we want, we can also clean this up a bit by taking advantage of dynamic values in our job schedule configuration. When we set the request body for our job, we can write it like this:
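For example, something along these lines; the `{{ ... }}` templating syntax is a guess at how dynamic values are written, so check the scheduler’s docs for the exact format:

```json
{
  "min": 100,
  "webhookUrl": "YOUR_DISCORD_WEBHOOK_URL",
  "previousIds": "{{ $previous.ids }}"
}
```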
If we do this, the handler no longer needs to read the IDs out of the `$previous` metadata; it can take them directly from the request body.
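Concretely, the line from the handler sketch above might change like so (the `previousIds` field name is an assumption tied to the request body shown above):

```ts
// Before: read the IDs out of the $previous metadata
const previousIds: number[] = $previous.ids || [];

// After: read the IDs directly from the request body, filled in by the dynamic value
const previousIds: number[] = req.body.previousIds || [];
```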
Now, assuming you’ve deployed your API endpoint to `https://yourdomain.com/api/hn`, you can create your scheduled job by running the following script in your terminal, with your `BOOPER_API_KEY` set and the `url` modified to the appropriate domain:
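A sketch of that script; the Booper endpoint path, payload fields, and cron syntax are assumptions rather than the documented API:

```bash
curl -X POST "https://booper.dev/api/schedules" \
  -H "Authorization: Bearer $BOOPER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://yourdomain.com/api/hn",
    "method": "POST",
    "cron": "0 * * * *",
    "body": {
      "min": 100,
      "webhookUrl": "YOUR_DISCORD_WEBHOOK_URL",
      "previousIds": "{{ $previous.ids }}"
    }
  }'
```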
Also included in the metadata is the schedule’s `state`, which can be accessed at `req.body.$state` and set or updated in the response.
Example using the schedule `$state`
Let’s modify the API handler to take advantage of `$state`:
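A sketch of the `$state` version, where the set of already-sent IDs lives in the schedule’s state. Returning `$state` in the response body to update it is the convention assumed here:

```ts
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // `$state` is the schedule's persisted state, injected by the scheduler.
  const {min = 100, webhookUrl, $state = {}} = req.body || {};
  const sentIds: number[] = $state.ids || [];

  const stories = await fetchTopStories();
  const matches = stories.filter(
    (story) => story.score >= min && !sentIds.includes(story.id)
  );

  if (matches.length > 0) {
    await notify(webhookUrl, matches);
  }

  // Include `$state` in the response to persist the updated set of sent IDs.
  return res.status(200).json({
    count: matches.length,
    $state: {ids: [...sentIds, ...matches.map((s) => s.id)]},
  });
}
```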
This may look similar to what we did above when using the `$previous` metadata, but there is one key difference: while `$previous` data is ephemeral, the schedule’s `$state` is stored in the database. This means that if a schedule is paused and then restarted, the `$previous` data will be lost, but `$state` will be persisted.
Once again, we can clean this up a bit by taking advantage of dynamic values in our job schedule configuration. When we set the request body for our job, we can write it like this:
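As before, the `{{ ... }}` templating syntax and the `sentIds` field name are assumptions about how dynamic values are written:

```json
{
  "min": 100,
  "webhookUrl": "YOUR_DISCORD_WEBHOOK_URL",
  "sentIds": "{{ $state.ids }}"
}
```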
If we do this, the handler no longer needs to read the sent IDs out of the `$state` metadata on the request; it can take them directly from the request body (while still returning `$state` in the response to update it).
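Concretely, the line from the handler sketch above might change like so (again, the `sentIds` field name is an assumption tied to the request body shown above):

```ts
// Before: read the IDs out of the $state metadata
const sentIds: number[] = $state.ids || [];

// After: read the IDs directly from the request body, filled in by the dynamic value
const sentIds: number[] = req.body.sentIds || [];
```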
Now, assuming you’ve deployed your API endpoint to `https://yourdomain.com/api/hn`, you can create your scheduled job by running the following script in your terminal, with your `BOOPER_API_KEY` set and the `url` modified to the appropriate domain:
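A sketch of that script, mirroring the one above but referencing the schedule’s state; the Booper endpoint path, payload fields, and cron syntax remain assumptions:

```bash
curl -X POST "https://booper.dev/api/schedules" \
  -H "Authorization: Bearer $BOOPER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://yourdomain.com/api/hn",
    "method": "POST",
    "cron": "0 * * * *",
    "body": {
      "min": 100,
      "webhookUrl": "YOUR_DISCORD_WEBHOOK_URL",
      "sentIds": "{{ $state.ids }}"
    }
  }'
```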