Content filtering explained
What is content filtering?
Content filters are among the most commonly used security solutions for businesses and consumers. Their main function is to block websites, emails, or file types (usually executables) that could be unproductive or harmful if accessed. Restrictions may be based on the criteria of a corporate acceptable use policy or on compliance regulations such as the Children’s Internet Protection Act (CIPA). Content filtering can be deployed as a hardware appliance or as software, and it is often included as a feature of Internet firewalls. The industry is trending toward cloud-based content filtering solutions, which demand less computing power from local machines and allow remote third-party servers to deliver more comprehensive filtering.
Many content filtering solutions let administrators apply more granular controls, such as department-specific filters. For example, a company may decide that external file-sharing products like Dropbox™ or Google Drive will be allowed for sales and marketing employees, who frequently communicate and collaborate with outside stakeholders. By contrast, the company may block these file-sharing services for the rest of the company, or for departments that work closely with sensitive data, such as IT, human resources, or finance.
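Department-specific controls like those described above amount to a policy lookup. The following is a minimal hypothetical sketch; the department names and content categories are illustrative assumptions, not taken from any real product:

```python
# Hypothetical department-scoped filtering policy. Departments without an
# explicit entry fall back to the default rule set.
BLOCKED_CATEGORIES = {
    "default": {"file-sharing", "gambling"},
    "sales": {"gambling"},      # file-sharing allowed for external collaboration
    "marketing": {"gambling"},
    "finance": {"file-sharing", "gambling"},  # handles sensitive data
}

def is_blocked(department: str, category: str) -> bool:
    """Return True if the content category is blocked for the department."""
    rules = BLOCKED_CATEGORIES.get(department, BLOCKED_CATEGORIES["default"])
    return category in rules
```

With this sketch, `is_blocked("sales", "file-sharing")` is False while `is_blocked("finance", "file-sharing")` is True, matching the scenario in the paragraph above.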
Content filtering use cases
Defending against phishing attacks
Content filtering can help prevent social engineering and fraud, especially phishing attacks. Phishing websites (also known as “spoofed websites”) and emails are created by cyber attackers to impersonate trusted organizations and acquire sensitive data, such as login credentials. For example, a cybercriminal could create a website that mimics a bank’s login page and encourages victims to enter their online banking credentials. Once the credentials are acquired, they can be used to transfer money out of the account.
When phishing websites become known to web content filters by their URL or by their content, visits to those sites can be blocked. Some content filtering solutions draw on databases of known phishing sources, along with heuristics that can detect phishing, to keep employees from reaching those sites. The databases and correlation rules are built by security researchers who perform threat analysis, and content controls can use them to create smarter, more robust filtering logic.
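The database-plus-heuristic combination can be sketched in a few lines. The domains and the heuristic below are illustrative assumptions; real products rely on far larger, professionally curated threat databases:

```python
# Minimal sketch of URL-based phishing checks: an exact-match blocklist
# (standing in for a threat database) plus one crude heuristic.
from urllib.parse import urlparse

KNOWN_PHISHING_DOMAINS = {"examp1e-bank.com", "secure-logon-update.net"}  # illustrative
SUSPICIOUS_KEYWORDS = ("login", "verify", "update")  # crude heuristic only

def check_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in KNOWN_PHISHING_DOMAINS:
        return "block"  # exact match against the known-bad list
    if any(kw in host for kw in SUSPICIOUS_KEYWORDS) and host.count("-") >= 2:
        return "flag"   # heuristic: keyword-laden, hyphenated lookalike host
    return "allow"
```

For example, `check_url("https://examp1e-bank.com/signin")` returns "block", while a hyphenated keyword-heavy host like `my-login-portal.example` would only be flagged for review.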
Protecting sensitive data and personal information
Another use for content filters is to help prevent sensitive data from being leaked to unauthorized people or organizations. Email content filtering can limit the addresses a worker can send email to, so that sensitive data only reaches recipients authorized by the company. Properly configured web content filters can also help prevent sensitive information, such as Social Security numbers or private mailing addresses, from being displayed or logged. That can assist with the data privacy regulations many industries are subject to.
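Recipient restrictions of this kind reduce to checking every destination address against an allowlist. A minimal sketch, where the approved domains are hypothetical placeholders:

```python
# Sketch of outbound email recipient filtering: the message is allowed only
# if every recipient belongs to a company-approved domain.
APPROVED_DOMAINS = {"example.com", "partner.example.org"}  # illustrative

def may_send(recipients: list[str]) -> bool:
    """Allow the message only if all recipient domains are approved."""
    return all(
        addr.rsplit("@", 1)[-1].lower() in APPROVED_DOMAINS
        for addr in recipients
    )
```

A message to `alice@example.com` would pass, while one that also includes `carol@unknown.net` would be held, since a single unapproved recipient fails the check.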
The evolution of content filtering technology
Content filtering has evolved over time. Many recent content filters have replaced Bayesian analysis with machine learning. Bayesian analysis applies statistical inference with Bayes’ theorem to update the probability of a hypothesis as more information becomes available; with this approach, administrators still tweak the filters with new rules based on what they have learned about undesirable web or email content. With machine learning, by contrast, the model behind the content filter learns to identify undesirable content on its own over time, through use, which can make the filter more effective with less manual tuning.
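Bayes’ theorem as used by classic spam filters can be shown with a worked example. The probabilities below are made-up illustrative values, not measurements:

```python
# Worked example of Bayes' theorem: update the probability a message is spam
# given that it contains a particular word.
#   P(spam | word) = P(word | spam) * P(spam)
#                    -------------------------------------------------
#                    P(word | spam) * P(spam) + P(word | ham) * P(ham)
def p_spam_given_word(p_word_spam: float, p_word_ham: float,
                      p_spam: float = 0.5) -> float:
    p_ham = 1 - p_spam
    numerator = p_word_spam * p_spam
    return numerator / (numerator + p_word_ham * p_ham)

# Suppose "free" appears in 60% of spam but only 5% of legitimate mail:
print(round(p_spam_given_word(0.60, 0.05), 3))  # 0.923
```

A word that is equally likely in spam and legitimate mail leaves the probability unchanged, which is exactly why administrators must keep feeding such filters new, discriminating rules.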
Bayesian poisoning is a technique used by email spammers to bypass Bayesian email filtering. Spammers also sometimes engage in image spam, which puts the content of a spam email inside an embedded image. Punycode attacks are a form of character-set manipulation that uses Unicode characters resembling ASCII characters to bypass keyword-based filtering. Recent email content filtering based on machine learning can help prevent these attacks with heuristic content analysis.
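One simple defense against Punycode lookalikes is to decode a hostname’s Punycode form and flag any non-ASCII characters that could be masquerading as ASCII. A minimal sketch using Python’s standard `idna` codec; the example domain is the well-known Cyrillic-“а” lookalike of apple.com:

```python
# Sketch of a homoglyph check: decode Punycode ("xn--" labels) and flag
# hostnames whose true Unicode form contains non-ASCII characters.
def looks_like_homoglyph(hostname: str) -> bool:
    try:
        decoded = hostname.encode("ascii").decode("idna")
    except UnicodeError:
        decoded = hostname  # not valid Punycode/ASCII; inspect as-is
    # Any character outside the ASCII range deserves extra scrutiny.
    return any(ord(ch) > 127 for ch in decoded)

print(looks_like_homoglyph("xn--pple-43d.com"))  # True: Cyrillic "а" + "pple.com"
print(looks_like_homoglyph("example.com"))       # False: plain ASCII
```

Production filters go further (confusable-character tables, brand-name distance checks), but this shows the core idea: inspect what the user will actually see, not the raw ASCII form.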
Companies in various industries are shifting their focus from workplace compliance, productivity control, and bandwidth restrictions to preventing cyber-attacks like malware, phishing, and SQL injection. Modern content filters based on artificial intelligence are getting smarter about identifying possible cyber-attacks from email and web vectors.
Content filtering versus URL filtering
It’s easy to confuse content filtering with URL filtering, but URL filtering is just one form of content filtering. Content filters can work by blocking keywords, file types, malware correlations, or contextual themes of content resources. URL filters, by contrast, block content based on the string, path, or general contents of a URL. Like other content filters, URL filters can draw on threat databases of website URLs associated with phishing, malware, adult content, and so on.
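The distinction can be made concrete: a URL filter inspects only the URL’s components, never the page content. A minimal sketch with made-up rules and categories:

```python
# Sketch of URL filtering: categorize a request from URL components alone
# (hostname substring, path prefix), with no inspection of page content.
from urllib.parse import urlparse

URL_RULES = [
    ("gambling",     lambda u: bool(u.hostname and "casino" in u.hostname)),
    ("file-sharing", lambda u: u.path.startswith("/share/")),
]  # illustrative patterns only

def categorize(url: str) -> str:
    parts = urlparse(url)
    for category, matches in URL_RULES:
        if matches(parts):
            return category
    return "uncategorized"
```

So `categorize("https://big-casino.example/")` yields "gambling" from the hostname alone, whereas a full content filter could also classify a page whose URL reveals nothing.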
Who should be using security content controls?
Any organization that allows its employees to use the web and email on company-owned computing devices could benefit from content filtering solutions to help enforce internal company policies, to add an extra layer of security against malware, and to help comply with industry regulations. Content filtering can help businesses increase productivity by blocking time-wasting websites that distract employees from focusing on their core job responsibilities.
Some publicly funded organizations, such as schools and libraries, use content filtering to demonstrate compliance with mandates such as CIPA by blocking students’ access to inappropriate websites on their phones, tablets, and PCs. Organizations facing these compliance requirements deploy content filtering solutions specifically designed to restrict content categories such as pornography, gambling, and violence.
Help protect against web-based threats with AT&T Global Security Gateway
AT&T Global Security Gateway offers unified protection against web-based threats for office, remote, and mobile users. It is fully managed 24x7 by the AT&T Security Network Operations Center, offers service options sized from small businesses to enterprises, and supports both cloud and hybrid environments. For more information about the solution, visit the Global Security Gateway page.
To get sales help from an AT&T Cybersecurity specialist, please complete this form.