Showcase your Gatsby Site

Have you created a Gatsby site that you’re proud of? It could be for a client or just your personal site. Either way, you can submit it to the Gatsby showcase.
It’s pretty simple to do. Just follow the instructions here. In a nutshell, you fork Gatsby on GitHub and add your site details to their showcase configuration file. Once the PR is approved and merged, the next time they deploy the https://gatsbyjs.org site, your site will be in the list!

A few bonuses to doing this:

it’s free publicity for you
it’s a contribution to open source
you become part of the maintainers team for the Gatsby organization on GitHub
the Gatsby team gives you free swag for the PR. Who doesn’t want Gatsby socks?!

I’m under blogs in the showcase, but you can also go directly to my site profile.
And for those interested, here’s my site’s source code full of TypeScript, React and Netlify CMS goodness.

nickytonline / www.iamdeveloper.com

Source code for my web site iamdeveloper.com

iamdeveloper.com

Hey there, I’m Nick and this is my site’s source code. This site started off as a clone of the Netlify CMS Gatsby Starter (check it out!). Since then, I’ve tweaked it a lot and converted the codebase to TypeScript.
Feel free to peruse the code and/or fork it. 😉
Thanks to all the wonderful projects that made it possible to build this blog.

Babel
React
Gatsby
TypeScript

Netlify & Netlify CMS

undraw.co
simple-icons
and OSS in general.

To get up and running:

clone the repository by running git clone git@github.com:nickytonline/www.iamdeveloper.com.git or git clone https://github.com/nickytonline/www.iamdeveloper.com.git

run npm install

run npm run develop to get up and running with the Gatsby development server.
Since the project uses Babel and not TypeScript as the compiler, a separate process is required to run type checking. Open another terminal and run npm run type-check:watch.
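For reference, here’s a plausible pair of npm scripts behind those commands, assuming tsc runs in no-emit mode purely for type checking (an illustrative assumption, not necessarily this repo’s exact setup):

// package.json (hypothetical excerpt)
{
  "scripts": {
    "develop": "gatsby develop",
    "type-check": "tsc --noEmit",
    "type-check:watch": "tsc --noEmit --watch"
  }
}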

If you’re curious about why the Netlify CMS admin…


Link: https://dev.to//nickytonline/showcase-your-gatsby-site-266

Q&A with Helen Anderson, BI Data Analyst and Technical Consultant

This interview in the Simple Analytical Analysts Assemble series is with New Zealand-based BI Data Analyst and Technical Consultant, Helen Anderson. I first ran into Helen while she was writing an excellent series on SQL here on Dev.to.
I’ve since found her to be one of the most helpful and approachable members of the site and a great proponent of helping encourage everyone in the community. I’m very pleased to say she’s agreed to share her story and data journey with us here.
Over to Helen…
1) Tell us a bit about yourself, how did you get into the data space and what does your data journey look like so far?
Hello! I’m Helen Anderson and I’m a Data Analyst and Technical Consultant on Xero’s Data Services team. I support Xero’s Analyst Community with code review, building datasets they need, maintaining the database they use for their work and providing guidance for junior analysts.
My data journey so far hasn’t been the most traditional one.
I didn’t study computer science or even IT at university. After graduating with a Business degree I landed in the world of Supply Chain Analysis. I really enjoyed solving the puzzle of how to get the right stuff to the right people at the right time: putting together a plan based on the customer’s needs, the shipping timetables and, in the case of my first job (shipping apples from New Zealand around the world), what the growers estimated they would harvest. Even though we did everything using Excel, it set me on the path to where I am now.
Since then I’ve worked for the Royal NZ Ballet on revenue projections based on theatre seating configurations, for Timex in London planning manufacturing of watches for Europe, and for Icebreaker balancing stock between locations around the world.
Making the move from working in Excel to coding in SQL happened when I joined Xero three years ago. I started in the Marketing team as an analyst pulling lists for email campaigns and doing post-campaign analysis. I was pretty late to the game when it came to using SQL, but was hooked. Now I am working in the BI team in a role that allows me to support the junior and less technical analysts who were ‘me’ three years ago, while growing my technical skills on projects to build data models on a new database platform.
2) What’s a typical day look like for you in your current data role? (Which tools and languages do you use? Big team/small team/lone wolf? Office-based or remote?)
For the first time in my career, I’m on a big team of analysts, developers and engineers. There are almost 30 on the Data Services team. A big difference from being the lone analyst in most of my roles so far.
When I first joined the team the imposter syndrome hit me pretty hard.
When you are the lone analyst you do what you know, because there’s no one but StackOverflow to ask. I’ve spun those feelings around and made a point of asking for help and learning from those around me with many years of experience. We have an incredible team culture, everyone is happy to help, and we recognise everyone’s good work with a monthly Awesome Award.
We have a team of analysts who work on ad-hoc data requests, a BI team that builds the models used in reporting, a Data Engineering team that maintains the data pipelines and platforms, and MicroStrategy consultants who make sure the self-service visualisation platform is user-friendly.
Because my role is to support Analysts in teams across the business, I get to do a little bit of everything. Some days I’m building a custom data set in Redshift to support analysts’ work, showing them how to use MicroStrategy to present their reports, or troubleshooting the Airflow jobs that move data to the dedicated Aurora database we maintain to support their work.
3) You’ve built up a large following through your blogging. How important do you think it is for data professionals, at all stages of their career, to share publicly what they are doing and learning?
I began blogging on my self-hosted WordPress site the last time I was on the job hunt. To no one’s surprise, it wasn’t exactly flooded with new visitors.
After seeing a colleague blogging on Medium, I weighed up my options and decided to take the plunge on Dev.to. I’m really glad I did. I’m not a traditional web developer or software engineer like a lot of the community, but I still have something to offer.
My first few posts were ‘listicles’, easy to read, but felt a little too Buzzfeedy. I reassessed the tone and found a more conversational voice. The same way I’d talk to a colleague about a technical subject, but without recreating technical documentation.
Blogging is beneficial in so many ways.
You’re reinforcing your learning or understanding of your chosen technology or tool. So even if you only get a handful of views, you have still done something worthwhile.
Blogging is all about your unique point of view, which doesn’t mean you have to know everything about a topic. That’s why you are writing a blog post, and not rewriting technical documentation.
Even if you are a beginner explaining how you are learning to use a new tool, your perspective is important. You never know who may stumble across your post and find that your explanation of a topic helps it all click into place for them.
4) Where do you see your own data career going next? Building on your technical skills as an Individual Contributor or moving into a more management-based role?
I’m incredibly lucky to be on a team that encourages both. Right now I’m building up my knowledge of the quirks of PostgreSQL, as that’s the flavour of Aurora database the analysts I support have moved to. I’m also working on gaining more of an understanding of the data pipelines that load the data in, and of the AWS services we use to build the infrastructure, both for my own interest and to get to know more about what our DevOps and Data Engineers do.
I’m looking forward to supporting the junior analysts more as the analyst community grows and will be giving public speaking a go with my first tech talk really soon. Even though I’m not making the career change to Management I’m still able to support and teach those around me.
5) If you had a list of “best-kept-secrets” (websites, books, coaches), which would you recommend?
I recently put together a list of resources I think are great for Junior Analysts, covering not only the technical side with SQL but the human side too – requirements gathering, visualisation and communication.
My favourite blogs and podcasts at the moment are:
Data36 – https://data36.com/ – tutorials and hands-on learning.
Simple Analytical – https://simpleanalytical.com/ – commentary on the data world and the ups and downs of being an analyst.
Mode – https://mode.com/blog/ – content for analysts, by analysts.
Soft Skills Engineering – https://softskills.audio/ – my favourite podcast advice show about non-technical topics.
6) What is the number one piece of advice you give to aspiring data scientists?
Being a great data analyst or scientist is more than just churning out SQL and knowing your way around the database. It’s important to learn how to listen to stakeholders and determine what it is they need from a report or dashboard. Put equal amounts of effort into learning communication skills, interpreting the story behind the numbers and presenting data in a way your end user finds the most digestible.
By honing these skills, as well as building models and your technical skills, you’ll go far.
7) Where can readers find you online?
You can find me blogging regularly on Dev.to, on Twitter, on LinkedIn or my own personal site.
If you liked this interview with Helen, please keep an eye on SimpleAnalytical.com for more installments in the Analysts Assemble series. If you work with data, in your job or spare time, and would like to join the series, please drop me an email: alan AT simpleanalytical DOT com

Link: https://dev.to//alanhylands/qa-with-helen-anderson-bi-data-analyst-and-technical-consultant-1mjk

10 Things You Can Do If You Have A Bad Deploy

Think about this. You just finished some updates to your website or project. Of course you’re excited because you fixed a lot of bugs and you got to show a few people how great it was when you ran it locally.
If you’re in a small company or you work for yourself, you might have to deploy those changes to the production server yourself. So you clean up your code, make all of your commits in Git, and deploy the changes that make you look like you can do anything.
Then it doesn’t work. At all. Now you have angry people telling you to fix it and you’re not sure what the problem is. Here are a few things you can check really fast to help figure out the issue:

Check the connection strings

It’s nice when it’s something easy like this. Make sure that you’re connecting to the production database and not your development one. Depending on how you do your deployment, the connection string could be automatically updated to match the environment it’s in. That could mess up the string value and cause weird issues.
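For example, in Node.js you could make the environment choice explicit (DATABASE_URL and the local URL below are placeholder names, not from this article):

// Pick the connection string based on the environment instead of hardcoding it.
const connectionString =
  process.env.NODE_ENV === 'production'
    ? process.env.DATABASE_URL // production database (placeholder variable)
    : 'postgres://localhost:5432/my_app_dev'; // local development database

console.log(`Connecting to: ${connectionString}`);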

Check the CORS settings

If you have any calls to APIs that aren’t hosted on your site’s origin, you definitely need to check your CORS settings. Your cross-origin resource sharing (CORS) configuration could block incoming data you expect. So if you’re using services like Google Maps, Stripe, or SendGrid, make sure you quadruple-check your CORS settings.
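For instance, if your own API runs on Express, the cors middleware lets you whitelist your production origin. A quick sketch (the origin URL is a placeholder):

const express = require('express');
const cors = require('cors');

const app = express();

// Only accept cross-origin requests from the production front-end (placeholder URL).
app.use(cors({ origin: 'https://www.example.com' }));

app.get('/api/data', (req, res) => res.json({ ok: true }));

app.listen(3000);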

Check the username and password for everything

Sometimes it’s the little things. Make sure you used the right username and password for everything your site uses. That includes the database, any API authentication, or internal authentication. It’s best to check these simple things first so you don’t waste time looking at other issues that aren’t the problem.

Check the Publish procedure

Some people like to use build pipelines to run any last-minute tests or checks, and it’s not a bad idea if you can do it. A part you can include in a pipeline is automatically publishing the new files once they pass all the tests. Make sure that this publish procedure references the right folders and files on both the client and server sides.

Check the published files

This might not be possible all the time, especially if you chunk files. But if you have the option to look at the files, it’s a good way to find out whether the right files were published. You can always check the date stamp, but sometimes it helps to see if the changes you made are actually there. Maybe you accidentally published the wrong branch of the site. It happens.

Check the appsettings.json

There’s a chance that a value in here was changed during local testing and wasn’t changed back before you deployed. The server root address will probably be different on your live site versus your local one. An IsEnabled value might need to be false instead of true for things to work on the live site. Just do a quick run-through to make sure nothing looks unusual.
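As a hypothetical example of the kind of values worth double-checking (the file content below is illustrative, not from a real project):

// appsettings.json (hypothetical excerpt)
{
  "ServerRootAddress": "https://www.example.com",
  "IsEnabled": false,
  "Logging": {
    "LogLevel": { "Default": "Warning" }
  }
}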

Check the config files

This is where a lot of the behind-the-scenes rules for your site live. Is there a Rewrite action you’re missing, or anything else? You could have some routing that isn’t quite right. Again, just go through everything and check the values. Also, if you have another similar project that you worked on, you can see what values it has.

Check the changes you made

Did you break it? It’s not the most unlikely scenario. We all do crazy hacks to get stuff working locally that we would never purposely deploy. Check your code to see if you removed anything like that because that’s most likely your problem. Even if it runs on your machine, just go through your changes and see if you did something odd.

Look for version differences

Is the front-end of your site still compatible with the back-end? Is the back-end still compatible with the server? Updates to libraries or entire frameworks can cause some nightmares because they take the longest to figure out. Save yourself a lot of time and swearing and check those version compatibilities.

Republish an older, working version of the site

Until you can figure out why the site isn’t working with your latest changes, it’s ok to re-deploy an older, working version of the site. Anytime a site is down, the person who owns the site is losing money. You don’t want that because then you’ll lose money. Explain to the site owner that you’ll just have to go back to the older version until you find the issue with the new one. They understand. Sometimes…
Do you have any horror stories of a project you KNOW worked, but it just wouldn’t work when you deployed it? I definitely have a few.
I’ve been trying to figure out what I can write that’s super useful so I asked my subscribers in a survey. They said they want more tutorials on the different frameworks and that’s what I’m going to do. That includes React, Angular, Vue, Next, Express, and some of the others.
You should sign up for my emails so that you can stay up to date with which tutorials are coming out and have input on the next ones. Here’s the link for that. There’s also a free email class in it for you. 😉

Link: https://dev.to//flippedcoding/10-things-you-can-do-if-you-have-a-bad-deploy-h7g

Speed up your internationalization calls up to 5-1000 times

Context

It all started two years ago. I was working on a new PWA for a big social network, written from scratch, that needed an i18n module to handle different languages. The module had to:

handle interpolation.
handle PLURAL and SELECT expressions.

be lightweight (it’s a PWA, must run with limited bandwidth).

run fast (some users had low-end devices).

And that’s where things got tricky: the only viable library was Google Closure MessageFormat. It was not so fast on low-end devices and weighed heavily on our bundle. So I decided to write my own with performance in mind.
Fast forward to today: the problem is still the same with i18n libraries, so I open-sourced 💋Frenchkiss.js, a 1kB i18n library 5 to 1000 times faster than the others.
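To give a feel for the API, here’s a minimal usage sketch based on the project’s README (treat the exact signatures as an approximation):

import frenchkiss from 'frenchkiss';

// Register a translation table for English
frenchkiss.set('en', {
  hello: 'Hello {name} !',
});

// Select the active language
frenchkiss.locale('en');

// Translate with interpolation
frenchkiss.t('hello', { name: 'John' });
// => 'Hello John !'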
Stay with me for a journey through performance optimizations.

👉 Time to speed up your webapp for mobile devices!

🤷 How do i18n modules work?

Under the hood, it can be ugly: some i18n modules re-process the translation on each and every call, resulting in poor performance.
Here is an example of what can happen inside the translate function (a really simplified/naive version of Polyglot.js).
const applyParams = (text, params = {}) => {
  // Apply plural if exists
  const list = text.split('||||');
  const pluralIndex = getPluralIndex(params.count);
  const output = list[pluralIndex] || list[0];

  // Replace interpolation
  return output.replace(/%\{\s*(\w+)\s*\}/g, ($0, $1) => params[$1] || '');
};

applyParams('Hello %{name} !', {
  name: 'John'
});
// => Hello John !
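Note that getPluralIndex isn’t defined in this snippet; a naive English-only stand-in (my assumption, real libraries use per-locale CLDR rules) could be:

// Naive English plural rule: index 0 = singular form, 1 = plural form (assumption).
const getPluralIndex = count => (count === 1 ? 0 : 1);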

In short, on each translation call we split the text, calculate the plural index, create a RegExp, replace all occurrences with the given parameter if it exists, and return the result.
It’s not that big of a deal, but are you fine doing it multiple times on each render/filter/directive call?

👉 It’s one of the first things we learn when building apps in React, Angular, Vue.js or any other framework: avoid intensive operations inside render methods, filters and directives; they will kill your app!

Some i18n libraries do better!

Some others optimize things quite a bit: Angular, VueJS-i18n and Google Closure, for example.
How do they do it? They parse the string only once and cache a list of opcodes to process on subsequent calls.
If you aren’t familiar with opcodes, they’re basically a list of instructions to process; in this case, just to build a translation. Here’s a possible example of opcodes generated from a translation:
[{
  "type": "text",
  "value": "Hello "
}, {
  "type": "variable",
  "value": "name"
}, {
  "type": "text",
  "value": " !"
}]

And here is how we print the result:
const printOpcode = (opcodes, params) => opcodes.map(code => (
  (code.type === 'text') ? code.value :
  (code.type === 'variable') ? (params[code.value] || '') :
  (code.type === 'select') ? printOpcode( // recursive; branch opcodes stored on the opcode
    code.data[params[code.value]] || code.data.other, params
  ) :
  (code.type === 'plural') ? printOpcode( // recursive; plural forms stored on the opcode
    code.list[getPluralIndex(params[code.value])] || code.list[0], params
  ) :
  '' // TODO not supported?
)).join('');
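With the opcode list above, printing looks like this:

printOpcode([
  { type: 'text', value: 'Hello ' },
  { type: 'variable', value: 'name' },
  { type: 'text', value: ' !' },
], { name: 'John' });
// => 'Hello John !'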

With this type of algorithm, more time is spent on the first call, which generates the opcodes, but we store them and re-use them for faster performance on subsequent calls:

It doesn’t split the string.
It doesn’t do intensive regex operations.
It just reads the opcodes and merges the results together.

Well, that rocks! But is it possible to go further?

🤔 How can we speed things up?

💋Frenchkiss.js goes one step further: it compiles the translation into a native function, one so light and pure that the JavaScript engine can easily JIT-compile it.

How does it work?

Quite simply: you can actually build a function from a string by doing the following:
const sum = new Function('a', 'b', 'return a + b');

sum(5, 3);
// => 8

For further information, take a look at the Function constructor (MDN).

The main logic is still to generate an opcode list, but instead of using it to build a translation on every call, we use it to generate an optimized function that returns the translation without further processing.
This is possible because of the simple structure of interpolation and SELECT/PLURAL expressions: the generated code is basically a return statement with some ternaries.
const opCodeToFunction = (opcodes) => {
  const output = opcodes.map(code => (
    (code.type === 'text') ? escapeText(code.value) :
    // Quote the key so the generated code reads params["name"], not params[name]
    (code.type === 'variable') ? `params[${JSON.stringify(code.value)}]` :
    (code.type === 'select') ? … :
    (code.type === 'plural') ? … :
    '' // TODO Something wrong happened (invalid opcode)
  ));

  // Fall back to an empty string literal if there is no data,
  // so the generated body stays valid JavaScript.
  const result = output.join('+') || '""';

  // Generate the function
  return new Function(
    'arg0',
    'arg1',
    `
      var params = arg0 || {};
      return ${result};
    `);
};

⚠️ Note: when building dynamic functions, make sure to avoid XSS injection by escaping user input!
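The escapeText helper used above isn’t shown in the post; one safe way to build it (my assumption, not the library’s actual code) is to lean on JSON.stringify, which produces a properly quoted and escaped string literal:

// Turn raw text into a safe, quoted JS string literal for the generated code (assumption).
// e.g. escapeText('Hello "John"') => '"Hello \"John\""'
const escapeText = text => JSON.stringify(String(text));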
Without further ado, let’s see the generated functions (note: the real generated functions are a little more complex, but you’ll get the idea).

Interpolation generated function

// "Hello {name} !"
function generated (params = {}) {
return ‘Hello ‘ + (params.name || ”) + ‘ !’;
}

By default, we still fall back to an empty string to avoid printing "undefined" as plain text.

Select expression generated function

// "Check my {pet, select, cat{evil cat} dog{good boy} other:D"
function generated (params = {}) {
return ‘Check my ‘ + (
(params.pet == ‘cat’) ? ‘evil cat’ :
(params.pet == ‘dog’) ? ‘good boy’ :
(params.pet || ”)
) + ‘ :D’;
}

We don’t use strict equality, to keep support for numbers.

Plural expression generated function

// "Here {N, plural, =0{nothing} few{few} other{some}} things !"
function generated (params = {}, plural) {
const safePlural = plural ? { N: plural(params.N) } :{};

return ‘Here ‘ + (
(params.N == ‘0’) ? ‘nothing’ :
(safePlural.N == ‘few’) ? ‘few’ :
‘some’
) + ‘ things !’;
}

We cache the plural category to avoid re-computing it when there are multiple checks.
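The plural argument here is the language’s plural-category function. A hedged sketch of what such a function could look like (the shape is my assumption, not taken from the post):

// Maps a count to a CLDR-style plural category; English-like rules only (assumption).
const englishPlural = n =>
  n === 0 ? 'zero' :
  n === 1 ? 'one' :
  'other';

englishPlural(3);
// => 'other'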

🚀 Conclusion

With generated functions we were able to execute code 5 to 1000 times faster than the others, avoiding RegExp, split and map operations in the rendering critical path, and also avoiding Garbage Collector pauses.

Best of all, it’s only 1kB gzipped!

If you’re looking for an i18n JavaScript library to accelerate your PWA or your SSR app, you should probably give 💋Frenchkiss.js a try!

Link: https://dev.to//vince_tblt/speed-up-your-internationalization-calls-up-to-5-1000-times-1778