In this post, I will show you how to install and run PySpark locally in Jupyter Notebook on Windows. I've tested this guide on a dozen Windows 7 and 10 PCs in different languages. You will need two things: Python with Jupyter, both of which you can get by installing the Python 3.x version of the Anaconda distribution, and winutils.exe, a Hadoop binary for Windows, from Steve Loughran's GitHub repo. In that repo, go to the folder for the Hadoop version that matches your Spark distribution and download winutils.exe from its /bin directory.
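Once winutils.exe is saved somewhere on disk, Spark needs to find it through the `HADOOP_HOME` environment variable. Here is a minimal sketch of setting the variables from inside Python before importing PySpark; the paths below (`C:\hadoop` and a versioned Spark folder under `C:\spark`) are assumptions, so substitute wherever you actually unpacked Spark and saved winutils.exe:

```python
import os

# Assumed locations -- adjust to your machine. winutils.exe is expected
# to live at %HADOOP_HOME%\bin\winutils.exe (here: C:\hadoop\bin\winutils.exe).
os.environ["HADOOP_HOME"] = r"C:\hadoop"
os.environ["SPARK_HOME"] = r"C:\spark\spark-2.4.0-bin-hadoop2.7"

# Spark shells out to winutils.exe, so its folder must be on PATH.
os.environ["PATH"] = os.environ["HADOOP_HOME"] + r"\bin;" + os.environ["PATH"]
```

Setting these once under System Properties → Environment Variables makes them permanent; the in-Python version is just convenient when you are iterating in a notebook.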
When I write PySpark code, I use a Jupyter notebook to test it before submitting a job to the cluster.
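To check that everything is wired up, a short smoke test in a fresh notebook cell is enough. This is a minimal sketch assuming `pyspark` is importable in your notebook kernel (for example via `pip install pyspark`, or a `findspark.init()` call that points at `SPARK_HOME`):

```python
from pyspark.sql import SparkSession

# Start a local Spark session inside the notebook process,
# using all available cores.
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("smoke-test")
    .getOrCreate()
)

# Build a tiny DataFrame and print it; if winutils.exe and the
# environment variables are set up correctly, this should show
# a two-row table with no errors.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
df.show()

spark.stop()
```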