Tailwind 2.x recommends using PostCSS as its preprocessor. However, if you still want to use Sass, the documentation isn’t the clearest on how to set it up.

There is a page for it, but it doesn’t give you the step-by-step breakdown needed - so here’s what’s needed to use Sass with Tailwind 2.x and Laravel Mix.

First, rename the resources/css/app.css file to use the .scss extension:

mv resources/css/app.css resources/css/app.scss

Next, remove the default postCss config from the webpack.mix.js file:

// snip...
mix.postCss('resources/css/app.css', 'public/css', [
    require('postcss-import'),
    require('tailwindcss'),
    require('autoprefixer')
])

Now require the tailwindcss module, switch to the sass plugin, and configure postCss to use the tailwind.config.js file:

const tailwindcss = require('tailwindcss')
// snip...
mix.sass('resources/css/app.scss', 'public/css')
    .options({ postCss: [ tailwindcss('./tailwind.config.js')]})

That’s it!

When you run npm run dev, the missing Sass and other JS dependencies will be detected and automatically installed.

Continue reading

Adding a CI/CD process to my workflow is one of the quick wins I go for on every serious project I work on.

Whilst most of my personal work is hosted on Gitlab, a recent project I was working on had its code in Bitbucket. This was my first time working with Bitbucket, so I wanted to document how I built assets, linted code and ran tests using its pipelines.

What will these pipelines do?

By the end of this article you should have a working bitbucket-pipelines.yml file for your Laravel project which will do the following:

  • Use composer to install your project’s dependencies
  • Run php-cs-fixer to enforce a code style
  • Run larastan to run static analysis against the code base
  • Run php-cs-fixer and larastan in parallel
  • Run phpunit to run your project’s test suite
  • Run yarn run production for a production build
  • Allow us to manually trigger a deployment to production using Laravel Deployer.
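
Putting that together, a rough outline of the finished file looks something like this (the image, step names and exact commands here are illustrative rather than the final version from the article):

image: composer:2

pipelines:
  default:
    # Install dependencies first so later steps can reuse them via the artifact.
    - step:
        name: Install composer dependencies
        caches:
          - composer
        script:
          - composer install --no-interaction --prefer-dist
        artifacts:
          - vendor/**
    # Code style and static analysis run in parallel.
    - parallel:
        - step:
            name: Code style (php-cs-fixer)
            script:
              - vendor/bin/php-cs-fixer fix --dry-run --diff
        - step:
            name: Static analysis (larastan)
            script:
              - vendor/bin/phpstan analyse
    - step:
        name: Tests (phpunit)
        script:
          - vendor/bin/phpunit

The production build and the manual deployment trigger are covered in the full article.
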
Continue reading

Today I was working on an API written in Laravel for a React Native app with another developer.

He was trying to make requests to the Laravel backend and told me he kept receiving a response with an HTTP status code of 302. 3XX HTTP status codes are redirection status codes.

It turned out that he had not set an Accept header on the requests the app was making to the server.

If you don’t set a request’s Accept header, it defaults to Accept: */*. With that value, Laravel responds with a Content-Type of text/html.

For the purposes of this project, every API response needs to return JSON.

We can achieve this really easily by using middleware to overwrite the Accept header on the incoming request and set it to application/json.
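
A minimal sketch of that middleware might look something like this (the class name is my own choice for illustration, and you’d still need to register it against your api routes in the HTTP kernel):

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class ForceJsonResponse
{
    // Overwrite the Accept header so Laravel always negotiates a JSON response.
    public function handle(Request $request, Closure $next)
    {
        $request->headers->set('Accept', 'application/json');

        return $next($request);
    }
}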

Continue reading

At our company, setting up Gitlab CI configuration is one of the jobs I end up doing by default.

This weekend I wrote a package to help speed that process up by generating a .gitlab-ci.yml file, as well as installing the packages and configuration files that the generated pipeline needs.

The package currently provides a single artisan command to do all of the above after answering a few simple questions.

Check the repo out here:

https://github.com/talvbansal/laravel-gitlab-ci-config-generator

Continue reading

Recently I wrote about my current Gitlab CI process. When it came to the deployment part of the process I showed how I was handling it using a tool called Laravel Deployer, but I didn’t break down what Laravel Deployer was doing or how I had it configured.

The Laravel Deployer docs are pretty good, however there are a couple of server config issues that I always find myself referring back to when setting up auto deployment - mostly automatically restarting Laravel Horizon and restarting php-fpm without needing sudo privileges.
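
The php-fpm part usually comes down to a single sudoers entry allowing the deploy user to reload the service without a password - roughly something like this (the user name, service path and PHP version here are assumptions for illustration):

# /etc/sudoers.d/deployer
deployer ALL=(root) NOPASSWD: /usr/sbin/service php7.4-fpm reload
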

Let’s imagine I was going to set up Gitlab CI/CD for a fictional project hosted on the fictional domain deployer.talvbansal.com, with a real repository here, hosted somewhere like Linode, Digital Ocean or even AWS.

Continue reading

Introduction

My most read articles on this blog are about Gitlab CI/CD with PHP. They cover a basic linting, testing and crude deploying process.

Today I want to look at my current CI/CD process for my Laravel projects in more depth. Currently the pipelines for my projects might vary slightly, but they are all very similar to this:

Gitlab Pipeline

So there are 5 main stages in the process:

  • Preparation - Pulling down dependencies and storing them in an artifact
  • Syntax - Check code syntax
  • Testing - Run unit tests
  • Building - Build assets
  • Deployment - Deploying to an appropriate server

The stages are processed in order, with each stage containing one or more tasks. Should a task fail in a stage, the whole pipeline stops and is marked as failed.
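
In .gitlab-ci.yml terms, the skeleton of that pipeline is just the stage list plus a job per task - something along these lines (the job names and scripts here are illustrative, not the exact config from my projects):

stages:
  - preparation
  - syntax
  - testing
  - building
  - deployment

composer:
  stage: preparation
  script:
    - composer install --prefer-dist --no-interaction
  artifacts:
    paths:
      - vendor/

phpunit:
  stage: testing
  script:
    - vendor/bin/phpunit
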

To run the pipelines I make use of Gitlab’s free tier, which gives you access to 2000 shared minutes per month, as well as a runner on a server of my own. More about setting that up can be found here.

Continue reading

I recently watched the following great talk on hacking Laravel apps.

Towards the end of the talk Antti shows how it is possible to potentially gain root access to a server if your scheduler is running as root too.

As soon as I saw it I knew I had a couple of apps where this vulnerability could have been exploited, so I went to patch them straight away.

Whilst I knew what needed to be done, I wasn’t 100% sure how I’d add an entry to another user’s crontab - one that wasn’t my own or root’s.

It turns out it was quite simple: acting as root, use the -u argument to specify the target user.

sudo crontab -e -u www-data

In the above example the crontab for the user www-data would be opened. Since my php-fpm instance runs as www-data, and therefore already has access to all the application code, this made sense to me.
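
For reference, the entry itself is just the standard scheduler line from the Laravel docs (adjust the project path for your own app):

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
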

Hopefully I’ll never make this mistake again. If you haven’t already seen Antti’s talk above I’d highly recommend doing so asap!

Continue reading

Introduction

Laravel apps read sensitive information from their .env file.

Recently I found out that Laravel Mix can pass values from the same .env file to the JS portion of your app, as long as they are prefixed with MIX_.
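
For example (MIX_APP_URL is just an illustrative name), a value like this in your .env file:

MIX_APP_URL=https://example.test

ends up available in your compiled JavaScript as:

const appUrl = process.env.MIX_APP_URL
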

I use Gitlab CI pipelines to build production assets so that I don’t need the additional tooling on the production servers, the main build command being:

  • yarn run production

This is preceded by cp .env.example .env, meaning that when the build commands run they use values from the .env.example file.

If your project doesn’t make use of anything from the .env file then this is totally fine. However, in scenarios where you do, your production application will almost certainly have different .env values to those in the .env.example file (never commit credentials to source control!), so the resulting assets will have been built with the wrong values.

In this article I’m going to show how you can use Gitlab CI to build those assets with updated environment variables so that they function as expected when deployed to your production servers.
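
As a rough idea of where this ends up (one common approach, not necessarily the exact config from the rest of the article), the build job appends CI/CD variables to the copied .env before running the build:

build assets:
  stage: building
  script:
    - cp .env.example .env
    - echo "MIX_APP_URL=${MIX_APP_URL}" >> .env
    - yarn install
    - yarn run production
  artifacts:
    paths:
      - public/

Here MIX_APP_URL would be defined as a CI/CD variable in the Gitlab project settings.
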

Continue reading

Introduction

Laravel’s Form Requests are a great way of removing validation logic from your controllers.

There are times where it can be useful to update or change the request data before it is passed to the validator - for example formatting postcodes, removing invalid characters or providing default values.

The official documentation shows how we can perform additional logic after the rule sets have been run, but not beforehand.

Digging through the Form Request API, there is a method called prepareForValidation - an empty method that is called before the actual validation rules are run, as can be seen in its implementation:

The prepareForValidation method

So, given that the method is empty, how do we use it to go about updating the form request data?
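
As a sketch of the pattern the rest of the post works through (the postcode formatting here is just an illustration), you override the method in your own Form Request and merge the modified data back in before the rules run:

protected function prepareForValidation()
{
    // Normalise the postcode before the validation rules are applied.
    $this->merge([
        'postcode' => strtoupper(str_replace(' ', '', (string) $this->postcode)),
    ]);
}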

Continue reading

One of the projects I’ve been working on recently has involved writing a system to communicate with a client’s pre-existing legacy system. That system doesn’t have the ability to “talk out” but can be queried using a SOAP service.

The legacy system is:

  • Slow to interact with
  • Prone to crashing regularly

Here’s how we dealt with those 2 issues:

Since we don’t want to impact our end users’ experience, the project has been set up to make use of Laravel’s queuing system with Redis, and those queues are managed with Laravel Horizon.

When these jobs failed for whatever reason - including the client’s system going down - we wanted to be notified so that someone could look into what was happening.

The team at spatie.be created a great package to handle notifying us when a failed job occurred.

However, there are multiple jobs running concurrently, some as frequently as every minute, to check for things on their server. This meant that once the legacy system went down we would receive multiple notifications until it came back up (in some cases this has been hours - not ideal).

After a while, being flooded with these sorts of notifications makes them an annoyance rather than useful, and you begin to ignore them.

So I set out to write something that gave me everything the Spatie package did, but also allowed me to throttle how frequently notifications of a given type would be sent to us.

Laravel throttled failed jobs!

Continue reading
Talv Bansal

Full Stack Developer, Part Time Photographer