Hello, hello! Recently I posted a two-part article on creating a Guest wireless network using OpenWRT, VLANs, and firewall rules. We left things kind of open from a security standpoint: we gave our Guest users full Internet access with no restrictions on sites, bandwidth usage, or ports! Yikes! In this article I am going to walk you through the steps to close those gaps. First we will configure a Web Proxy server that proxies outbound Internet connections. This lets us see where and what our Guests are trying to get their hands on, good and bad. We will also force Guests to connect to this Web Proxy server transparently. What I mean by that is the Guests will not be required to configure anything on their side; our firewall will take care of that. And lastly, I only want to allow a limited amount of HTTP bandwidth. You will see later on how we accomplish this. I've expanded upon an earlier article of mine that uses Squid proxy to filter ads.
Phew, let’s get started.
Installing Squid Proxy
-
Install dependencies
The easiest way to install dependencies on Ubuntu or another Debian-based Linux server is to use the apt-get build-dep and apt-get source commands. We are installing Squid from source because the default Squid package in the Ubuntu repositories doesn't have the configuration options we need to make our project work.
apt-get install build-essential fakeroot devscripts gawk gcc-multilib dpatch
apt-get build-dep squid3
apt-get build-dep openssl
apt-get source squid3
NOTICE: We just installed the essential build tools, along with any Squid dependencies, and the source files.
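If the build later fails complaining about missing SSL headers, one reader (see the comments below) also needed the OpenSSL development headers; installing them up front is harmless and may save a retry:
apt-get install libssl-dev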
You should now have a squid3-3.1.19 folder.
-
Modify the build script
We need to modify the build script so that, when the source is configured, SSL support is included.
vi squid3-3.1.19/debian/rules
Add --enable-ssl under the DEB_CONFIGURE_EXTRA_FLAGS section.
... DEB_CONFIGURE_EXTRA_FLAGS := --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3 --mandir=/usr/share/man --with-cppunit-basedir=/usr --enable-inline --enable-ssl ...
Don’t forget to save!
-
Configure, Make, Make Install
cd squid3-3.1.19/
debuild -us -uc -b
NOTICE: Here we use debuild, which will automatically configure, compile, and create installable DEB packages.
After this has completed, DEB packages will appear in the parent directory. Mine were called squid3_3.1.19-1ubuntu3.12.04.2_amd64.deb, squid3-common_3.1.19-1ubuntu3.12.04.2_all.deb, and squid3-dbg_3.1.19-1ubuntu3.12.04.2_amd64.deb. Install them:
dpkg -i squid3_3.1.19-1ubuntu3.12.04.2_amd64.deb squid3-common_3.1.19-1ubuntu3.12.04.2_all.deb squid3-dbg_3.1.19-1ubuntu3.12.04.2_amd64.deb
-
Verify Squid Installation
For this step we just need to make sure that Squid properly installed itself and has SSL support.
squid3 -v
Look for the version number 3.1.19
squid3 -v |grep ssl
Look for the --enable-ssl item.
Configuring Squid Proxy
-
Copy the squid.conf file
cp /etc/squid3/squid.conf /etc/squid3/squid.conf.bak
-
Preparing Squid
We need to prepare the cache directory for Squid.
cd /home/user
mkdir squidcache
chown proxy. squidcache
This creates a new directory in our home folder called squidcache, owned by the proxy user.
-
Initializing Squid Cache
Ensure squid is not running first!
service squid3 stop
Initialize cache…
squid3 -z
-
Edit the squid.conf file
NOTICE: I highly recommend taking a look at Squid3 Documentation, here.
vi /etc/squid3/squid.conf

#.................................
#Access Lists
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl home_network src 192.168.0.0/24
acl guest_network src 192.168.1.0/24

#Ports allowed through Squid
acl Safe_ports port 80    #http
acl Safe_ports port 443   #https
acl SSL_ports port 443
acl SSL method CONNECT
acl CONNECT method CONNECT

#allow/deny
http_access allow localhost
http_access allow home_network
http_access allow guest_network
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all

#proxy ports
http_port {proxy_server_IP}:3128
http_port {proxy_server_IP}:8080 intercept

#caching directory
cache_dir ufs /home/user/squidcache/ 2048 16 128
cache_mem 1024 MB

#refresh patterns for caching static files
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.index\.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern . 0 40% 40320

#nameservers
dns_nameservers {your DNS server IP}
Let’s step through this:
- acl = tells Squid which IP addresses and/or hosts belong to a given Access List. For example, home_network is any IP sourced from the 192.168.0.0/24 network.
- Safe_ports = tells Squid which ports are allowed through the proxy; we have defined only 80 and 443.
- SSL_ports = tells Squid which port is allowed when making an SSL connection.
- http_access = defines which Access Lists (acl) are allowed to connect to the proxy.
- http_port = binds an IP and port for the proxy server to listen on. We have two: one is used for the transparent proxy, the other for clients who explicitly configure their browsers to use the proxy server.
- intercept = is required for transparency to work.
- cache_dir = defines where Squid should store cached static files and how much space it may consume. ufs is the type of storage system, /home/user/squidcache/ is the directory to use, 2048 allocates 2048 MB of capacity, 16 is the number of first-level subdirectories, and 128 the number of second-level subdirectories. For more info, see here.
- cache_mem = defines how much memory should be allocated for Squid caching.
- refresh_pattern = used to assign expiration, storage, retention, etc. of static files. More info, here.
- dns_nameservers = defines the nameservers for Squid to use, rather than those in the /etc/resolv.conf file. I recommend these be your external or DMZ DNS servers.
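Before testing, it can be worth asking Squid to parse the new configuration and report any syntax mistakes, then (re)starting the service. The paths below assume the /etc/squid3/squid.conf location used above.
squid3 -k parse -f /etc/squid3/squid.conf
service squid3 restart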
-
Testing
Assuming that your proxy server has a path out to the Internet, configure your browser to use a proxy server and point it at your Squid server. Use the instance running on port 3128.
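If you would rather test from a shell than a browser, a quick check like the one below should also work; the proxy IP is a placeholder and curl's -x flag simply sends the request through an explicitly configured proxy. A successful run returns the site's response headers, usually including a Via header that mentions squid.
curl -x http://{proxy_server_IP}:3128 -I http://www.example.com/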
Success!
Firewall rules and Redirects
-
Enable Proxy server Internet Access
If not already done, give the Proxy Server Internet access via our Linux Router.
On the Linux Router:
iptables -t nat -I POSTROUTING -s {yourProxyIP}/32 -p tcp -m multiport --dports 80,443 -j MASQUERADE
NOTICE: This assumes that your router has a public interface, or an interface that routes to the Internet.
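As an optional sanity check, you can list the NAT table and watch the packet counters on the new rule climb as the proxy generates traffic:
iptables -t nat -L POSTROUTING -n -v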
-
Quick test
From the Proxy server:
telnet cnn.com 80
...
GET /
This should return some html.
-
Transparent redirects
On my Linux Router I have 2 interfaces. eth0 = Internet/Public Interface and eth1 = Trunk containing both 192.168.0.0/24 and 192.168.1.0/24 networks.
We need to catch traffic coming from these networks whose destinations are outbound to the Internet. The easiest way to filter is based on destination port and source IP/network.
On our Linux Router:
iptables -t nat -A PREROUTING -s 192.168.0.0/24 ! -d {squid-server-IP}/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination {squid-server-IP}:8080
iptables -t nat -A PREROUTING -s 192.168.1.0/24 ! -d {squid-server-IP}/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination {squid-server-IP}:8080
iptables -t nat -A PREROUTING -s 192.168.0.0/24 ! -d {squid-server-IP}/32 -p tcp -m tcp --dport 443 -j DNAT --to-destination {squid-server-IP}:8443
iptables -t nat -A PREROUTING -s 192.168.1.0/24 ! -d {squid-server-IP}/32 -p tcp -m tcp --dport 443 -j DNAT --to-destination {squid-server-IP}:8443
We must also add FORWARD rules:
iptables -A FORWARD -s 192.168.0.0/24 -d {proxy-server-ip}/32 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A FORWARD -s 192.168.1.0/24 -d {proxy-server-ip}/32 -p tcp -m multiport --dports 80,443 -j ACCEPT
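One assumption worth spelling out: the Linux router must have IPv4 forwarding enabled for clients to reach the proxy at all (a reader raises this in the comments below).
# Enable IPv4 forwarding immediately
sysctl -w net.ipv4.ip_forward=1
# To persist across reboots, add this line to /etc/sysctl.conf:
#   net.ipv4.ip_forward=1
Also note that the DNAT above happens before the FORWARD chain, so redirected packets arrive there already carrying the proxy ports (8080/8443, or 3128 for explicitly configured clients); if your FORWARD policy is not ACCEPT, you may need to match those ports rather than 80,443.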
-
Testing
With the previous rules in place, any client on either the 192.168.0.0/24 or 192.168.1.0/24 network should be transparently redirected to the proxy server. To test, remove any explicit proxy settings you may have in your browser and then try to connect to an Internet site directly, first over http:// and then over https://.
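From a guest client (with no proxy configured in the browser), you can also confirm traffic is really passing through Squid by looking for the Via header Squid adds by default; the URL is just an example:
curl -I http://www.example.com/
# Expect a "Via: ... (squid/3.1.19)" style header in the response if interception is working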
Throttling and Filtering Traffic
It is imperative that you verify the functionality of the previous steps before continuing. In this section we are going to limit the bandwidth consumed by only our Guest Network clients, and also do some basic filtering of requests, such as blocking known malicious sites. We will use the delay_pool feature of Squid to perform this throttling and SquidGuard to perform filtering.
-
Adding a Delay Pool
A delay pool is a feature of Squid that allows you to delay outbound requests from users based on conditions. For our purposes, I wanted to limit Guest Network users to a flat rate of 100 KB/s. This ensures that no Guest Network user can completely saturate the total bandwidth from my ISP.
Let’s edit our squid.conf file:
vi /etc/squid3/squid.conf
Add the following lines:
#delay pools
delay_pools 1 # how many delay pools will be defined
delay_class 1 2
delay_access 1 allow guest_network
delay_parameters 1 1048576/1048576 102400/102400
delay_access 1 deny all
Let’s walk through this…
- delay_pools 1 — Denotes how many delay pools we will define in the squid.conf
- delay_class 1 2 — This matches a delay pool class to a delay pool. The 1 is our delay pool number to match, and the number 2 is the type of delay class. See here for more info on delay classes.
- delay_access 1 — This is a standard access list. It defines which ACL from the top of our squid.conf is associated with this delay pool; the 1 identifies the delay pool the ACL is tied to.
- delay_parameters 1 — Here is where we define the parameters for the delay class, specifically reducing bandwidth consumption to a flat 100 KB/s. The units are bytes per second. The first pair (1048576/1048576, or 1 MB/s) is the max aggregate bandwidth allocated to this delay pool. The second pair (102400/102400, or 100 KB/s) is the max bandwidth for each client within the ACL. This helps prevent one user from hogging all the bandwidth from the rest of our users.
- delay_access 1 — The last line says to deny all other ACLs access to delay pool 1.
NOTICE: Don’t forget to restart Squid!
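As a rough sanity check of the arithmetic: delay pool values are in bytes per second, so 102400 bytes/s is about 100 KB/s (roughly 800 kbit/s). From a guest client you can measure the effective rate with curl; the URL is a placeholder for any reasonably large file served over HTTP:
# Prints the average download speed in bytes/second; it should settle near 102400
curl -o /dev/null -s -w '%{speed_download}\n' http://{some-large-http-file}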
-
SquidGuard Filtering
apt-get install squidGuard -y
Once squidGuard is installed we need to tell Squid to use it. Once again, edit the squid.conf file:
vi /etc/squid3/squid.conf
Add the following lines:
#rewrite program squidGuard
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
url_rewrite_children 20 #threads
url_rewrite_concurrency 0 #jobs per thread
- url_rewrite_program — Defines the rewrite program we will use, in this case squidGuard; the -c flag tells it which squidGuard.conf file to use.
- url_rewrite_children 20 — Defines how many child processes (threads) to start. This varies with how many users you have as well as the resources of the proxy server itself.
- url_rewrite_concurrency — Tells Squid how many squidGuard jobs can run per child. Be careful with this, as total load increases by a factor of the previous parameter.
Adding blocklists:
Create a folder where you will keep the blocklist files:
mkdir ~/blocklists/
cd ~/blocklists/
vi blocked-domains
...omitted...
{bad domain name}
...omitted...
vi blocked-url

...omitted...
{bad ip site}
{bad url}
...omitted...
Then…
chown proxy. *
NOTICE: We just created two block files: one containing domain names, such as yahoo.com or facebook.com; the other containing IPs and URLs, such as 1.2.3.4 or 5.6.7.8/badstuff. Then we changed the ownership to the proxy user so Squid and SquidGuard can read them.
Next:
You may have noticed the /etc/squid/squidGuard.conf path in the step above. Let's create/edit that file with the squidGuard-specific options:

vi /etc/squid/squidGuard.conf

#.................................
dbhome /home/user/blocklists/

#define src
src guests {
  ip 192.168.1.0/24
}

#define category 'deny'
dest badsites {
  domainlist blocked-domains
  urllist blocked-url
  #expressionlist expressions
}

acl {
  guests {
    #allow all except badsites
    pass !badsites all
    #redirect
    redirect http://{webserver}/deny.html
  }
  default {
    pass all
  }
}
Lastly:
Initialize the block lists:
squidGuard -C all
NOTICE: You will have to run "squidGuard -C all" each time you modify the files. This will update the .db files squidGuard creates.
Further notes: The biggest issue with squidGuard is that it is very picky about the blocklist files. Each item should be on a new line without leading or trailing spaces, and make sure both the blocklist and the blocklist .db files are readable by Squid and SquidGuard. Also, I believe there is an issue with the current SquidGuard build when trying to filter based on source IP in transparent setups: each request handed to squidGuard from Squid seems to fall through to the default option in the squidGuard.conf file.
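If you want to check a blocklist entry without involving a browser, squidGuard can be fed a request line on stdin in the redirector format it expects from Squid (URL, client IP/FQDN, user, method). This is only a rough smoke test, and the URL/IP values below are placeholders:
echo "http://{bad domain name}/ 192.168.1.50/- - GET" | squidGuard -c /etc/squid/squidGuard.conf -d
# A blocked request should print the redirect URL; an allowed request prints a blank line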
Optional: Adding SSL Interception/Inspection Support
This next section allows your Squid proxy server to intercept SSL connections made from your clients. Warning! Doing so will most likely look like a man-in-the-middle attack. Clients will be connecting to your proxy server when trying to reach SSL-protected sites, thus breaking the SSL transaction. For example, a client opens a connection to https://mail.google.com. This connection is intercepted by the proxy server, which does not hold Google's private SSL key, so an untrusted-certificate mismatch occurs. I would also note that you should consider the behaviour you are trying to achieve by having SSL connections proxied through Squid. The nature of SSL does not allow us to easily perform proxy features such as caching, content filtering, and content manipulation. Therefore, if you are setting up SSL pass-through with Squid, you are effectively doing the same thing a router would. In conclusion, the only reasons I can think of for enabling SSL interception are auditing and monitoring. For example, you are willing to allow the use of 3rd-party web email (Gmail, Yahoo) to your employees, but you require that the users are monitored to prevent data leakage, etc.
For my purposes of a guest network, this was okay behaviour. In an enterprise, you would need additional steps to install trust between you and your users.
Creating our Self-Signed SSL Cert: (See here)
cd /etc/squid3/
mkdir certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout certs/squid.pem -out certs/squid.pem
NOTICE: You will be prompted to answer some information about the certificate, such as geographic location, common name. Fill in as you see fit.
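To double-check what was just generated (subject/common name and validity dates) before pointing Squid at it, you can have openssl print the certificate details:
openssl x509 -in /etc/squid3/certs/squid.pem -noout -subject -dates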
Enabling SSL-Bump:
vi /etc/squid3/squid.conf

#Add the following...
http_port {squidip}:8443 ssl-bump cert=/etc/squid3/certs/squid.pem key=/etc/squid3/certs/squid.pem
NOTICE: Users will get a certificate mismatch warning for the SSL-enabled sites they try to visit. They will have to add exceptions to trust the self-signed cert from above for each site. Again, this may not be the desired behavior. Consult with your PKI engineer for ways to do this in an enterprise setting where you may have an authoritative CA that can vouch for your clients.
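If you would rather have clients trust the proxy than click through warnings, one option (a reader below mentions importing a .der file into the browser) is to export the self-signed cert in DER form and install it on each client as trusted; the output filename is arbitrary:
openssl x509 -in /etc/squid3/certs/squid.pem -outform DER -out /etc/squid3/certs/squid.der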
Final Thoughts..
There are still some issues if attempting to deploy this in a production environment: transparent NAT security concerns, issues with filtering by source IP, SSL requiring SSL-Bump, etc. I will post another article soon once I have fine-tuned these.
Cheers!
Sources:
- http://www.d90.us/toolbox/2009/05/26/adding-ssl-support-to-squid-package-on-ubuntu/
- http://wiki.squid-cache.org/SquidFaq/CompilingSquid#Do_you_have_pre-compiled_binaries_available.3F
- http://www.squid-cache.org/Doc/config/https_port/
- http://wiki.squid-cache.org/Features/DynamicSslCert
- http://wiki.squid-cache.org/Features/SslBump
- http://wiki.squid-cache.org/Features/HTTPS
- http://ubuntuforums.org/showthread.php?t=2049290
- http://www.mydlp.com/http-and-https-redirecting-with-netfilter-iptables/
- http://www.howtoforge.com/squid-delay-pools-bandwidth-management
I have successfully enabled SSL bump on my Squid proxy. Is there any way to block consumer Google accounts using Squid? http://support.google.com/a/bin/answer.py?hl=en&answer=1668854
Please help me if you are aware
I had to install the libssl-dev package as well to get squid3 to compile with SSL support.
Sir, I want to try your tutorial, but I have some questions. You said that there are still some issues, including transparent NAT. What if we used WPAD and a PAC file? Also, I think you forgot to enable IPv4 forwarding; I got this idea from other tutorials. And the iptables rules above, are we going to run those in a shell, or add them to /etc/iptables.up.rules? What's the difference? Please enlighten us. Thanks
dear sayed please check this
http://img545.imageshack.us/img545/10/uxtf.jpg
so the images not work under https
I see the screenshot you are getting. What is the certificate being presented? Is it not trusted? You have to locally install the untrusted cert onto your clients… or, if a Windows domain, through the domain certificate authority.
yes, it is not trusted, and if I confirm the exception the sites still do not load successfully
so I can't do HTTPS caching without installing the certificate on client computers??
here is a document on enabling SSL inspection on Squid
http://computech.in/2013/11/install-and-configure-squid-and-squdguard-on-centos/
it did not work under Debian; the browser said the certificate was wrong and then the sites did not load successfully
Hi Hussein, see the comment I made above with your screenshot.
okay i replay thank you
Hi thejimmahknows,
my English is poor but I hope you understand me. First, thank you for your good tutorial.
I installed Debian Wheezy along with packages like isc-dhcp, configured it, and set up NAT.
So far everything was OK, since my local network accessed the Internet easily and rapidly.
When I installed Squid 3 and made it transparent, my local network could access some websites, but others like facebook.com, youtube.com, and nba.com were very slow, taking perhaps 2 or 3 minutes to display, and sometimes the page shows Connection time out : 100
according to you,
a) What is the cause of the problem and what is the solution
b) I think you tested your tutorial. I would like to ask: is your local network very slow or is it fast? I mean, does your local network work well?
c) Does your tutorial work on Debian Wheezy 7?
Thank you very much
Hello jamy_i3,
Yes, I also used Debian 7. I did not experience any noticeable speed difference between HTTP and HTTPS traffic. Are you doing SSL inspection?
Thanks for the good tutorial. My squid3 proxy on Ubuntu works OK, but when my clients open Facebook games they are not cached by my proxy because most of them are loaded over HTTPS. Does it work for those? Sorry my English is bad, thanks for the reply.
Thank you very muchhhhhhhhhhhhhhhhh. I got my Squid3.1.20 work right now. HTTP, HTTPS.
I also use SquidGuard to block all social Network websites also porns website.
Youpi
Thank you again.
Debian linux 7 Wheezy.
I got this error when compiling; what could it be? Thanks
make: *** [debian/stamp-makefile-build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2
debuild: fatal error at line 1357:
dpkg-buildpackage -rfakeroot -D -us -uc -b failed
Could be a variety of build issues. Could you verify you have installed all the correct dependencies?
I want to redirect port 443 to Squid, but the client gets the error: "There is a problem with this website's security certificate."
This is my command:
iptables: iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-port 3129
squid: https_port 3129 cert = path.file.crt key = path.file.key
Please help me solve this problem!
Can you provide more details/info into the error you are receiving?
When I access Google or any https:// site from the client, the error message is:
There is a problem with this website’s security certificate.
The security certificate presented by this website was not issued by a trusted certificate authority.
The security certificate presented by this website was issued for a different website address.
Security certificate problems may indicate an attempt to fool you or intercept data you send to the server.
We recommend that you close this webpage and do not continue to this website.
Recommended icon Click here to close this webpage.
Not recommended icon Continue to this website
More information
Make sure the computer you are connecting from has the proxy server's certificate installed and trusted.
Hi, Great Post. I'm very new to all this stuff but now have a working Squid3 (v3.4.8) proxy with SSL support running on my Raspberry Pi which I got for Christmas and very good it is too. I added --enable-ssl-crtd to my build script and then configured my squid.conf for Dynamic SSL Certificate Generation as per the web reference at the bottom of your post, generated certificates and imported the .der certificate into my browser. This gets explicit proxying running without any of the certificate problem warnings. I'm not interested in filtering and so disabled icap. I also picked some bones out of this: http://sichent.wordpress.com/2014/01/02/web-filtering-https-traffic-on-raspberry-pi/ including modifying the scripts to use "jessie" throughout.
However I would really like to do transparent proxying using the “intercept” switch with https too. I’ve got the explicit proxy running fine with:
http_port 192.168.0.253:3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/certs/mypi.pem
but if I try:
http_port 192.168.0.253:3126 intercept
https_port 192.168.0.253:3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/certs/mypi.pem
The http proxy runs transparently fine, but with https I get the annoying certificate problem security messages and some sites won’t work at all, like gmail.
so if I try just:
http_port 192.168.0.253:3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid3/certs/mypi.pem
It doesn’t work at all and my browser says that the url is too long. The Squid documentation suggests it should work but it plainly doesn’t.
Plus, can anyone explain the difference (or which is better or more secure) between using DNAT and REDIRECT in my IPTABLES rules? I’ve tried both:
-A PREROUTING -i wlan0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3126
-A PREROUTING -i wlan0 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 3127
and
-A PREROUTING -i wlan0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.0.253:3126
-A PREROUTING -i wlan0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.0.253:3127
and can't see any difference.
Thanks
I found my fix. With Squid 3.3 “ssl_bump server-first” was implemented and enables HTTPS to work transparently, ie with “intercept”. I’m running 3.4.8. I just changed the line in my squid.conf from “ssl_bump allow all” to “ssl_bump server-first”.
If you are running 3.3 or above “ssl_bump server-first” is the recommended option. See http://www.squid-cache.org/Doc/config/ssl_bump/
I still don’t get the difference between DNAT and REDIRECT in this case however. 🙂
Hi Richard. I’m glad you got it working. I believe the difference is REDIRECT is for port redirection, while DNAT is full Destination NAT’ing.
Caching HTTPS content works the same as HTTP: the content has a max age that applies whether it is delivered over HTTP or HTTPS, since that is just the delivery method.