Sunday, April 7, 2024

Vulnerability Management - Vulnerabilities vs Vulnerable Instance (Rapid7)

Rapid7 considers a vulnerability different from a vulnerable instance.


Vulnerabilities:

A “vulnerability” is a unique, defined, and publicly disclosed software weakness. Each vulnerability is typically identified by an enumeration system, barring a few exceptions based on the type of software. Although multiple enumeration systems exist, the Common Vulnerabilities and Exposures (CVE) system is the most widely used and accepted system today.

Vulnerability Instances:

A “vulnerability instance” refers to the specific condition on an asset that causes it to be vulnerable to a vulnerability. An asset can be vulnerable to the same vulnerability in multiple ways. Common causes for this scenario are:
  1. Having multiple versions of the same software installed on an asset at the same time; all of which are vulnerable to the same vulnerability.
  2. Being vulnerable to the same vulnerability through multiple network ports.
So just be careful when you are comparing numbers between raw reports and InsightVM's dashboards. A raw report will always show more rows (if you have selected the 'Vulnerability Proof' and 'Service Port' columns) than what the dashboard shows. I have not observed this kind of distinction in Qualys or Tenable yet.
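The counting difference above can be sketched in a few lines of Python. This is a hypothetical raw-report layout (the field names are my own illustration, not Rapid7's actual schema): the same asset/CVE pair appears on multiple rows because of different ports and proofs, so row count (instances) exceeds the unique-vulnerability count.

```python
# Hypothetical raw-report rows; 'port' and 'proof' are what multiply the rows.
rows = [
    {"asset": "srv01", "vuln": "CVE-2021-44228", "port": 8080, "proof": "log4j-core-2.14.jar"},
    {"asset": "srv01", "vuln": "CVE-2021-44228", "port": 8443, "proof": "log4j-core-2.14.jar"},
    {"asset": "srv01", "vuln": "CVE-2021-44228", "port": 8080, "proof": "log4j-core-2.10.jar"},
]

instance_count = len(rows)                                   # what the raw report shows
unique_vulns = len({(r["asset"], r["vuln"]) for r in rows})  # what a per-vulnerability view shows

print(instance_count, unique_vulns)  # 3 1
```

Three vulnerable instances, one vulnerability, which is exactly the gap you see between the export and the dashboard.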

Please refer to the below URL for more details:
https://docs.rapid7.com/insightvm/vulnerability-metrics-explained/

Happy Learning

Vulnerability Management - Orphan Vulnerabilities

The vulnerabilities with unknown status are known as orphan vulnerabilities. So the question is: why can't we know the status of such vulnerabilities?

Suppose you ran an authenticated scan against a server and some vulnerabilities requiring authentication were detected. Now, if for some reason you stop running authenticated scans, the scanner has no way to know whether the vulnerabilities detected in the previous scan still exist or not. So even if you remediate these vulnerabilities, they will still exist in the database. The only options are to manually purge the asset data or run an authenticated scan once again. Even if an agent is installed on the server, the vulnerabilities will still exist in the database (because VM solutions track data collected by a scanning appliance and by an agent separately).

If you are scanning the same server with an agent as well, then the following action can be taken (in the case of Rapid7 InsightVM):
You can enable complementary scanning (i.e. the scanner will skip authenticated checks wherever an agent is installed).
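The orphan condition above can be expressed as a small rule. This is a hedged sketch with made-up field names (no vendor exposes exactly this structure): a finding is an orphan candidate if it needed credentials to detect and no authenticated scan has run on that asset since the finding was last seen.

```python
from datetime import date

# Hypothetical findings; 'authenticated' means the check required credentials.
findings = [
    {"asset": "db01", "vuln": "outdated-openssl", "authenticated": True,
     "last_seen": date(2024, 1, 10)},
    {"asset": "db01", "vuln": "open-telnet-port", "authenticated": False,
     "last_seen": date(2024, 3, 1)},
]
# Last successful authenticated scan per asset (also hypothetical data).
last_auth_scan = {"db01": date(2024, 1, 10)}

def is_orphan(finding, last_auth):
    """Orphan candidate: needs credentials to re-verify, and no authenticated
    scan has run after the finding was last observed."""
    return (finding["authenticated"]
            and last_auth.get(finding["asset"], date.min) <= finding["last_seen"])

orphans = [f["vuln"] for f in findings if is_orphan(f, last_auth_scan)]
print(orphans)  # ['outdated-openssl']
```

The unauthenticated telnet finding never goes orphan, because any scan can re-verify it.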

Please refer to the below URLs for more details:
In case of Rapid7:
https://docs.rapid7.com/insightvm/using-the-insight-agent-with-insightvm/

In case of Qualys:
https://qualysguard.qg2.apps.qualys.com/qwebhelp/fo_portal/host_assets/agent_merge_data.htm

Happy Learning !!

Vulnerability Management - Mass target vulnerability remediation

We all know about various prioritization techniques used for targeted vulnerability remediation.

But how will you bring down a huge number of vulnerability count ?

Following are the ways I have observed till now:
  1. Increase the compliance percentage for patch management (ensure all in-scope assets are onboarded to the patch management solution and patches are getting pushed regularly).
  2. Disable deprecated protocols such as SMBv1.
  3. Remove software which is no longer used in your environment.
  4. Decommission EOL operating systems (of course, after running a scream test).
I still think if you are not breaking anything then you are not remediating 😅. Jokes apart, the above points are easy wins, the so-called low-hanging fruits, and hence easy to target.
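The four easy-win categories above lend themselves to a simple triage pass over raw findings. A hypothetical sketch (the finding titles, keyword rules and bucket names are all my own illustration, not any scanner's output format):

```python
# Toy findings; anything not matching an easy-win bucket stays in the
# normal patch cycle.
findings = [
    {"title": "SMBv1 enabled", "asset": "fs01"},
    {"title": "Adobe Flash Player detected", "asset": "ws22"},
    {"title": "Windows Server 2008 EOL", "asset": "app03"},
    {"title": "Apache Struts RCE", "asset": "web01"},
]

BUCKETS = {
    "deprecated_protocol": ("smbv1", "sslv3", "tls 1.0"),
    "unused_software": ("flash", "silverlight"),
    "eol_os": ("eol", "end of life"),
}

def bucket(title):
    t = title.lower()
    for name, keywords in BUCKETS.items():
        if any(k in t for k in keywords):
            return name
    return "needs_patch_cycle"

counts = {}
for f in findings:
    b = bucket(f["title"])
    counts[b] = counts.get(b, 0) + 1
print(counts)
```

Real triage would key off plugin IDs rather than title keywords, but the idea of routing bulk findings into easy-win buckets is the same.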

Happy Learning !!

Saturday, November 4, 2023

Vulnerability Management - Suppress Vulnerabilities

Today I will share with you a way by which you can ignore 99.9% of vulnerabilities in your environment. Ha ha ... Just kidding.


But on a serious note, there are a few vulnerabilities which you can suppress. Let's see them one by one.


1. SSL related vulnerabilities on systems in LAN network:


e.g. SSL Certificate Cannot Be Trusted (https://www.tenable.com/plugins/nessus/51192)

e.g. SSL Self-Signed Certificate (https://www.tenable.com/plugins/nessus/57582)

e.g. SSL Certificate with Wrong Hostname (https://www.tenable.com/plugins/nessus/45411)


Reason -> Organizations use self-signed certificates for systems in the LAN.


2. Vulnerabilities which are difficult to exploit due to enforcement of policy

e.g. Microsoft Office Trust Access to VBA Project Model Object Enabled (https://www.tenable.com/plugins/nessus/123461)


Reason -> VBA can be disabled using GPO.


3. Vulnerabilities due to how an OS vendor handles their patching regime and discloses vulnerabilities

e.g. CentOS vulnerabilities on Tenable Core not being mitigated (https://community.tenable.com/s/article/CentOS-vulnerabilities-on-Tenable-Core-not-being-mitigated?language=en_US)


4. In almost all organizations, patching on Windows servers is done via various patching tools (not via Automatic Updates)

e.g. MS KB3119884: Improperly Issued Digital Certificates Could Allow Spoofing (https://www.tenable.com/plugins/nessus/87313)

The plugin was flagged on Windows Server 2012 R2 servers but was fixed in Windows Server 2016.


5. Non-availability of patches from OS vendors


e.g. Curl 7.84 <= 8.2.1 Header DoS (CVE-2023-38039) for Windows 10 and Windows 11 OS (https://learn.microsoft.com/en-us/answers/questions/1387774/curl-7-84-(-8-2-1-header-dos-(cve-2023-38039)-for)

(https://www.tenable.com/plugins/nessus/181409)


e.g. Curl 7.69 < 8.4.0 Heap Buffer Overflow (https://www.tenable.com/plugins/nessus/182875)


Reason -> Platform support teams will not update packages from open-source projects, as doing so might break things and they will not get vendor support.


I know this is not much, but as the saying goes, "a little help is worth much more for the wretched". Ultimately, you need to use EPSS, CISA KEV, and various threat intel sources for prioritization to reduce actionable vulnerabilities.
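A suppression policy like item 1 can be codified so it applies only where the reasoning holds. A minimal sketch, assuming the policy decision is "suppress SSL-certificate plugins on LAN (RFC 1918) addresses only"; the plugin IDs are the Nessus ones cited above, and the function itself is my illustration, not a scanner feature:

```python
import ipaddress

# Nessus plugin IDs from the examples above: cert cannot be trusted,
# self-signed cert, cert with wrong hostname.
LAN_ONLY_SUPPRESS = {51192, 57582, 45411}

def suppress(plugin_id, ip):
    """Suppress only when the finding is in the list AND the host is on a
    private (LAN) address; internet-facing hosts keep the finding."""
    return plugin_id in LAN_ONLY_SUPPRESS and ipaddress.ip_address(ip).is_private

print(suppress(51192, "10.1.2.3"))  # True  -> suppress on LAN host
print(suppress(51192, "8.8.8.8"))   # False -> keep on public-facing host
```

The point is that a suppression should carry its own scoping condition, so it never silently hides the same plugin on an exposed system.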


Happy Learning !!

Vulnerability Management - Scanning approach to Load Balancers

As we all know, high availability solutions are very important these days. A load balancer (LB) is one such system which provides high availability apart from various other features such as security, scalability and performance.

A LB is a device or software that sits between clients and servers in a network. It distributes incoming traffic across multiple servers to ensure that the load is balanced and network services remain available. LBs are by their very nature intended to hide what is behind them. 

But scanning through a LB can create unwanted results. As it directs network traffic intelligently among multiple servers, when you scan THROUGH a LB using a VIP, you will get different results for the same VIP address across multiple scans.

Following issues may arise while scanning through LB:

  • Scanning LBs will show any vulnerabilities of the LBs themselves, which may lead you to thinking that the vulnerability is on the actual server when it is not.
  • Scanning through LBs, assuming there are multiple servers behind those LBs, may give you different results each time you scan the IP. For example, the first scan you hit Server1, then second scan you hit Server2. If those servers are not completely the same the results can show variations.
  • Suppose you are scanning a /24 subnet with only 10 or so live assets. Due to the high intensity of the scan, the LB may go into hardware protection mode and simply send a reply for every single query that the scanner makes to it. This will result in 255 assets showing as alive.

Hence, you should never scan THROUGH a LB. Either deploy agents, or place a scanner on the inside network of the LB. To scan a LB itself you would need to use its management IP address. 

When scanning using a Virtual IP Address (VIP), there currently isn't a way, from the scanning solution's perspective, to tell whether an IP address is a VIP or not. You would need to write a script to pull the configs from the LBs and extract the VIPs.
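Once you have that VIP list, the scanner-side half of the script is trivial. A sketch under the assumption that `known_vips` was already exported from the LB configs (the export itself is vendor-specific, e.g. F5 or NetScaler APIs, and not shown):

```python
# VIPs pulled from the load balancer configs (assumed input).
known_vips = {"10.0.5.10", "10.0.5.11"}

# Candidate scan targets; drop anything that is a VIP so we never
# scan THROUGH the LB.
targets = ["10.0.5.10", "10.0.6.21", "10.0.6.22"]
scan_targets = [t for t in targets if t not in known_vips]
print(scan_targets)  # ['10.0.6.21', '10.0.6.22']
```

The real servers behind the LB are then scanned directly (inside scanner or agents), and the LB itself via its management IP.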

Please refer to the below URLs for more details:

What is a LB? (https://aws.amazon.com/what-is/load-balancing/#:~:text=Load%20balancers%20increase%20the%20fault,or%20upgrades%20without%20application%20downtime)

Scanning approach to LBs (https://community.tenable.com/s/question/0D5f200005YPgFsCAL/scanning-approach-to-load-balancers?language=en_US)

What is a Virtual IP Address (VIP)? (https://www.pubconcierge.com/blog/virtual-ip-what-is-it-and-how-it-works/)


Happy Learning !!

Tuesday, October 3, 2023

Vulnerability Management - Vulnerability Dashboard using Power BI

Was playing with Power BI today. Created a simple dashboard using CISA KEV vulnerability data from https://nucleussec.com/cisa-kev/ (Nucleus Security)

What’s the difference between Power BI and Excel?
Will not comment rather I would say "What’s the difference between an alligator and a crocodile? You’ll see one later and one in a while." 😁

Happy Learning !!

CyberSecurity - Why do we need standard data formats ?

As we all know, there are data formats for various standards related to the storage, representation and exchange of information in the cybersecurity domain, e.g.:

1. Vulnerability - CVE

2. Platform - CPE, SWID and PURL

3. Configuration - CCE

4. Vulnerability Scoring - CVSS

5. SBOM - CycloneDX and SPDX

6. Identity Information - SAML and JWT

7. Malware Information - MAEC and MISP

8. Threat Information - STIX and TAXII

9. Log File - CSV, JSON, KVP (Key Value Pair) and CEF (Common Event Format)


and the list goes on.


Standard data formats are necessary because of the following reasons:

1. Enables correlation, integration and automation

2. Enables exchanging information among security vendors and among security researchers

3. Allows for the faster development of countermeasures (signatures and security patches)

4. Reduces potential duplication of malware and vulnerability analysis efforts by researchers
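As a tiny illustration of reason 1, a standard format means any tool can consume the data without vendor-specific code. For example, a CVSS v3.1 vector string has a fixed `METRIC:VALUE` layout, so a generic parser is a few lines (simplified sketch, no validation of metric names):

```python
def parse_cvss_vector(vector):
    """Parse a CVSS v3.x vector string into a metric -> value dict."""
    parts = vector.split("/")
    assert parts[0].startswith("CVSS:"), "not a CVSS vector"
    return dict(p.split(":") for p in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"], metrics["C"])  # N H
```

Every scanner, SIEM and ticketing tool that emits this string can interoperate, which is exactly the point of standard formats.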


Happy Learning !!

Vulnerability Management - Basics for beginners

Beginners in the Vulnerability Management domain have doubts such as where to begin or what to study. I have created a document and tried to answer such doubts. It is always good to learn the basics and then move towards advanced concepts. I have tried to provide links to corresponding points in the document as much as I can. In cases where you don't find any link or the link has expired, you can always google :).


Following are the points I want to highlight through this post:

  1. For beginners, please don't try to search interview questions directly. First create a theoretical base and internalize the concepts by performing practicals.
  2. Slow and steady wins the race, so give it 4-6 months of time. While going through the document you can observe that 40%-60% of the concepts are basics, hence you will not be wasting time by learning them. If, after some time, you don't find Vulnerability Management interesting, you can always navigate to other subdomains like incident response and penetration testing.
  3. You will get hands-on experience with enterprise solutions once you join an organization. You will face a different set of challenges there. Many on LinkedIn create posts to address such challenges, but first clear your basics to understand such posts/articles.
  4. Once you have performed the above steps, you can search interview questions and start appearing for interviews.
  5. I do not recommend directly going for global certifications, as a lot of content is available on the internet.


Finally, I find articles from Balint F. very interesting.

https://www.linkedin.com/in/balint-fazakas/recent-activity/articles/


Happy Learning !!


Monday, September 18, 2023

Vulnerability Management - Correlation using CPE

As we all know correlation is a very important aspect of vulnerability scanning. While performing a vulnerability scan, first assets are identified. Then a corresponding CPE is identified for each identified asset. Then CVEs are mapped to identified CPEs and thus vulnerabilities are correlated to identified assets.


CPE names are created on an as-needed basis, meaning CPEs are only generated when a CVE is released and the vulnerable target does not have an existing CPE. This implies that the absence of a matched CPE name does NOT indicate the absence of any issues.


The root of the problem is that to generate a useful CPE for a software component it needs to be predictably created and totally unique in order to match it against a central database which is then in-turn mapped to known vulnerabilities. The two fundamental limiting issues are:

  1. No central control over the naming of open-source components (i.e. not unique and predictable)
  2. The pace and manner in which components are created makes a central dictionary impractical


Use of CPEs for correlation has introduced false positives as well as false negatives. Following are the drawbacks of using CPE for correlation:

  1. There are CVEs which are not mapped to CPEs.
  2. There are assets which are not mapped to CPEs. 
  3. According to the CPE naming specification version 2.3, representing user-defined configurations of installed products is out of scope.
  4. CPE has no provisions to tell you whether a vulnerable extension is installed or not.
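The fragility of CPE-based correlation is easy to demonstrate. A naive matcher sketch (simplified CPE 2.3 parsing, no escaping rules): if the detected product string does not normalize to exactly the dictionary name, the match fails silently, i.e. a false negative.

```python
def parse_cpe23(cpe):
    """Pull the main fields from a cpe:2.3:part:vendor:product:version:... string.
    Simplified: ignores the CPE escaping rules."""
    fields = cpe.split(":")
    return {"part": fields[2], "vendor": fields[3],
            "product": fields[4], "version": fields[5]}

# Dictionary entry vs. what asset discovery reported (hypothetical names).
dictionary = parse_cpe23("cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*")
detected   = parse_cpe23("cpe:2.3:a:apache:log4j_core:2.14.1:*:*:*:*:*:*:*")

match = all(dictionary[k] == detected[k] for k in ("part", "vendor", "product", "version"))
print(match)  # False: 'log4j' vs 'log4j_core' -> the vulnerability is missed
```

This is exactly the "no central control over naming" problem: the same component under two names defeats exact-match correlation, which is why SWID and PURL aim for predictable, unique identifiers.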


Please refer to the below URLs for more details:

https://owasp.org/www-project-web-security-testing-guide/latest/5-Reporting/02-Naming_Schemes

https://www.arxiv-vanity.com/papers/1705.05347/

https://www.veracode.com/blog/managing-appsec/using-cpes-open-source-vulnerabilities-think-again


Solution: Switching to more evolved naming schemes such as SWID and PURL.

https://owasp.org/assets/files/posts/A%20Proposal%20to%20Operationalize%20Component%20Identification%20for%20Vulnerability%20Management.pdf


Happy Learning !!

Sunday, August 20, 2023

Vulnerability Management - Interview Questions

Most of the questions are straightforward. You can google 90% of the questions. 

Happy Learning !!

Vulnerability Management - SCAP and DISA STIG

As free versions of commercial vulnerability management solutions do not provide compliance scanning options, we can use freely available tools such as SCC (DISA) and CIS-CAT Lite (CIS) from a learning perspective.


I have created a document depicting the use of SCC from DISA. You can google basic concepts such as CIS, DISA, STIG, SRG, CIS Benchmarks, SCAP, OVAL, CVE, CPE, XCCDF, CCE and OCIL etc.


I find Wikipedia definition perfect, "SCAP comprises a number of open standards that are widely used to enumerate software flaws and configuration issues related to security. Applications which conduct security monitoring use the standards when measuring systems to find vulnerabilities, and offer methods to score those findings in order to evaluate the possible impact. The SCAP suite of specifications standardize the nomenclature and formats used by these automated vulnerability management, measurement, and policy compliance products.


A vendor of a computer system configuration scanner can get their product validated against SCAP, demonstrating that it will interoperate with other scanners and express the scan results in a standardized way." 


The point of this post is that free tools, resources and videos are available; let's make use of them and come out of the mindset that we can only learn after joining some organization. I agree there is no substitute for industry experience, but nobody can stop you from learning. Let's create labs, read documentation, demonstrate PoCs, and share the gained knowledge with the community. Thanks to all content creators on YouTube and LinkedIn, I have learned a lot from you and I am still learning.


Happy Learning !!


Vulnerability Management - SSL vs TLS

In one of my interviews, the interviewer asked me, "What is the difference between SSL and TLS?"


I said TLS is the successor of SSL. But he was not satisfied with my one-line answer. Then he asked, "Does that mean there is no difference between SSL and TLS?"


So guys following are the high level differences between SSL and TLS:


  1. Hashing --> SSL uses MD5, SHA-1 while TLS uses SHA-256
  2. Key exchange algorithm --> SSL uses KEA while TLS uses DH, ECDH, DHE, ECDHE, PSK etc.
  3. Data encryption --> SSL uses DES, RC4 etc. while TLS uses AES etc.
  4. Integrity --> SSL uses MAC while TLS uses HMAC
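Point 4 can be made concrete with Python's stdlib: a naive hash(key + message) MAC (the SSL-era style of construction, susceptible to length-extension attacks with Merkle-Damgård hashes like SHA-256) versus the HMAC construction TLS uses. The key and message below are made up, and this is an illustration of the constructions only, not either protocol's actual record MAC:

```python
import hashlib
import hmac

key, msg = b"secret-key", b"client hello"

# Naive MAC: hash(key || message) -- vulnerable to length extension.
naive_mac = hashlib.sha256(key + msg).hexdigest()

# HMAC: nested keyed construction as used by TLS.
tls_style = hmac.new(key, msg, hashlib.sha256).hexdigest()

print(naive_mac != tls_style)  # True: different constructions, different tags
```

Same hash function, same key, same message, yet different (and differently secure) constructions, which is why "MAC vs HMAC" is a real difference and not just renaming.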


The point of this post is, don't be like me, be like Bob.

Whenever Bob studies a concept, he always asks himself WHY? (For e.g., in this case) "Why is TLS needed when SSL is already there?"


So, being in the cybersecurity domain, you are not always expected to know the low-level details of each and every protocol, but at least you should know the high-level details and be able to correlate your answer with vulnerabilities (for e.g., in this case, SSL 3.0 is vulnerable to the POODLE attack and, together with TLS 1.0, to the BEAST attack; later TLS versions mitigate both).


Happy Learning !!

Monday, August 7, 2023

Vulnerability Management - Secure privileged account use

As performing a vulnerability scan or audit with an account lacking sufficient privileges may produce incomplete results, scanning solutions must be provided with privileged authentication and access levels on the end systems.


Since accounts used are privileged ones, following are the strategies Tenable recommends to avoid any kind of misuse:


1. Implement compensating controls for privileged accounts to limit risk, such as:


a. Log monitoring for when the account is in use outside of standard change control hours, with alerts for activities outside of normal windows.

b. Perform frequent password rotation for privileged accounts more often than the “normal” internal standard.

c. Enable accounts only when the time window for scans is active; disable accounts at other times.

d. On non-Windows systems, do not allow remote root logins. Configure your scans to utilize escalation such as su, sudo, pbrun, .k5login, or dzdo.

e. Use key authentication instead of password authentication.


2. Use Nessus Agents where available.


3. If you do not grant an exception with compensating controls, perform a scan with an account having lower privileges than what Tenable recommends and observe any missing results. Modify the account privileges so that all expected results are shown. Changes to the audit file or plugins may impact results later.


Please refer to the below URLs for more details:

https://docs.tenable.com/nessus/compliance-checks-reference/Content/CredentialedScanningandPrivilegedAccountUse.htm


Happy Learning !!

Vulnerability Management - On a lighter note !!

A few Nessus agents were appearing offline in Nessus Manager. I was working with a Windows engineer to troubleshoot the issue.


Normally, to resolve this issue, we try to unlink and re-link the Nessus Agent, from the Agent host.


As an administrator, from a command prompt, run the following commands:

Based on your operating system, use either C:\Program Files\Tenable\Nessus Agent or C:\ProgramData\Tenable\Nessus Agent.


> net stop "Tenable Nessus Agent"

> "C:\Program Files\Tenable\Nessus Agent\nessuscli" agent unlink (--force)

> "C:\Program Files\Tenable\Nessus Agent\nessuscli" agent link --key=<key> --host=<host> --port=<port>

> net start "Tenable Nessus Agent"
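If you have many agents to relink, the four commands above can be assembled programmatically. A hedged sketch: it only builds the command lists (the `nessuscli` flags are the ones shown above; the port default of 8834 is the usual Nessus Manager port, adjust to your environment); actually executing them, e.g. via `subprocess.run` as administrator, is left to the operator.

```python
# Default install path from the procedure above; may be under ProgramData instead.
NESSUSCLI = r"C:\Program Files\Tenable\Nessus Agent\nessuscli"

def relink_commands(key, host, port=8834):
    """Return the stop/unlink/link/start command sequence for one agent."""
    return [
        ["net", "stop", "Tenable Nessus Agent"],
        [NESSUSCLI, "agent", "unlink", "--force"],
        [NESSUSCLI, "agent", "link", f"--key={key}", f"--host={host}", f"--port={port}"],
        ["net", "start", "Tenable Nessus Agent"],
    ]

for cmd in relink_commands("ABC123", "manager.example.com"):
    print(" ".join(cmd))
```

Keeping the commands as data also makes it easy to log exactly what was run per host.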


I shared the steps with the engineer and asked him to execute them. While he was running those commands, he suddenly asked me: if we are stopping the "Tenable Nessus Agent" service, then what is the use of running the subsequent commands? I went blank for a couple of minutes.


Happy Learning !!

Vulnerability Management - Duplicate entries in vulnerability database

 Let's first understand what a reimage is ->


A reimage is the process of installing a new operating system on a machine. This process includes wiping, or clearing, the hard drive entirely, and installing a fresh operating system. When the reimage is complete, it is almost like getting a brand new machine!


Now, both Qualys and Tenable stamp a machine with a tracking UUID the first time they scan it. This way, if a machine changes IP addresses, or has multiple network interfaces, they can track the machine without creating duplicates.


But when you reimage a machine, the identification attributes change, which in turn means the same vulnerability will be repeated. How? Suppose a developer is using a particular version of a library, and this particular version is affected by a vulnerability. The developer, after working for a few days, decides to move out of the organization. The machine is sent for reimage. Now another developer gets this machine. When the Tenable agent is installed, it creates a new UUID. If the new developer installs and uses the same library, then a duplicate entry will be created.


Even though the machine was using the same MAC address and hostname, for the particular instance (Hostname + Port + Vulnerability) there will be two entries (two UUIDs): Tenable considered the machine as two different machines. Hence, the solution to such an issue is:


Save the whole key before you rebuild the machine, then restore the key before you re-scan it, or before you install the Qualys/Tenable agent if you use agents (Qualys stores its UUID in the Registry, in HKLM\Software\Qualys, Tenable stores its UUID in HKLM\Software\Tenable).
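The save/restore step can be scripted with Windows' built-in `reg.exe`. A sketch that builds the export/import commands (run as administrator on the host); the registry paths are the Qualys/Tenable ones cited above, while the backup filename is just an example:

```python
def backup_cmd(vendor_key, out_file):
    """reg export command to save the vendor's UUID key before reimage.
    /y overwrites an existing backup file without prompting."""
    return ["reg", "export", vendor_key, out_file, "/y"]

def restore_cmd(backup_file):
    """reg import command to restore the key before the agent reinstall."""
    return ["reg", "import", backup_file]

print(" ".join(backup_cmd(r"HKLM\Software\Tenable", r"C:\backup\tenable-uuid.reg")))
print(" ".join(restore_cmd(r"C:\backup\tenable-uuid.reg")))
```

Restoring the key before the agent is reinstalled lets the scanner see the rebuilt machine as the same asset, avoiding the duplicate.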


Please refer to the below URLs for more details:

https://dfarq.homeip.net/rebuild-machines-without-making-duplicates-in-qualys-or-tenable/#ixzz86yZsaJIb

https://community.tenable.com/s/article/How-Does-Tenable-io-Identify-an-Asset-as-Unique

https://hub.wpi.edu/article/183/prepare-a-computer-for-reimage


There are other use cases for duplication as well; this is just one of them.


Happy Learning !!

Tuesday, May 23, 2023

Vulnerability Management - Scanning using Nessus Essentials

Prepared a document on scanning a Windows 10 workstation using Nessus Essentials. It is the free version of the Nessus vulnerability scanner, hence you cannot perform compliance checks. However, the purpose of creating the document was to show how you can scan your workstation successfully. Normally, even if you enter correct credentials in the scan policy and authentication is successful, the scanner will still not be able to gather much data. Certain registry changes are required and a few services need to be enabled for a successful scan.


Downloading and installing Nessus Essentials is good to learn but the main part is about registry changes and services. One should understand why they need to be enabled and their significance.


I have added references to videos and websites I referred in the document.

 

Happy Learning !!


Tuesday, May 16, 2023

Cybersecurity - Architect vs Engineer vs Analyst

Cybersecurity Architect - One who decides how security is done and has holistic view of an organization's architecture (Expert in multiple domains)

For e.g. One who knows where firewall, AV, SIEM, VM, IDS/IPS, proxy, DLP etc. should be placed. Decides what policies should govern operation of aforementioned tools.

--> More involved in decision making


Cybersecurity Engineer - One who follows a given design and builds/engineers solutions.

For e.g. One who actually deploys and maintains (Deploy/Configure/Upgrade/Troubleshoot/Decommission) above mentioned solutions. 

--> More involved in implementation


Cybersecurity Analyst - One who uses data generated by these solutions to ensure cybersecurity. Provides feedback and reports issues to engineers based on which engineers finetune solutions.

For e.g. One who works on dashboards, alerts, incidents, reports etc. generated by above mentioned cybersecurity solutions.

--> More involved in analysis


Each role has its challenges, hence one should not jump to a conclusion about any of these roles being better than another.

 

Happy Learning !!


Vulnerability Management - BAU (Business As Usual) and Ad hoc tasks

Once scanners are deployed, scans are scheduled and reports are configured, one may ask: what tasks do vulnerability analysts/engineers perform then?


So following are the tasks which are operationalized:


1. Authentication - Troubleshoot authentication issues.

2. Scan Coverage - Ensure the scope for vulnerability scanning (cloud and on-premise) is defined and you are covering all the systems in scope. This may require correlation with the CMDB.

3. Offline Agents - Ensure the agents which are offline belong to systems which are decommissioned. Agents can go offline due to a variety of other reasons as well.

4. Reports - Normally, a lot of vulnerabilities are not patched by patching teams such as vulnerabilities related to 3rd party applications. They will patch vulnerabilities related to OS and corresponding native applications. Hence, you will always face requests to transfer ownership of such vulnerabilities. So, you will need to make changes to reports frequently if not regularly. You will also need to analyze reports to find assets which are not patched regularly (You can find out assets which are not onboarded in patching tools). 

5. Policy Fine Tuning - There are vulnerabilities which require particular settings to detect them. Similarly, for compliance, there are controls which require modifications depending on the environment.

6. Managing False +ves and Exceptions - As scanning solutions have limitations, false +ves and exception requests will keep arising.

7. Rescan and Decommission requests - These tasks are performed on regular basis.

8. Weekly/Monthly calls with various stakeholders - Normally, current status of remediation efforts and challenges faced by application/platform teams are discussed.

9. Penetration and Audit findings - You will need to work with various teams to fix these findings.

10. VM Policy - Every organization has a policy where SLAs and critical assets are defined. You will need to create/fine tune such policies.

11. Deliver Trainings - As attack surface is ever evolving, you will need to give regular trainings on cybersecurity best practices (Phishing/Shift left approach/Safe browsing etc.).

12. Sync with TI - Be in sync with Threat Intelligence team and prioritize remediation according to their inputs.


Following are the tasks which are done on Ad hoc basis:


1. Scanner/Manager upgrade.

2. Troubleshoot connectivity issues between scanner and manager or agents and manager.

3. Deploy new scanner/manager

4. Integration with various tools such as the following:

  • ITSM (e.g. ServiceNow)   
  • CMDB (e.g. ManageEngine)
  • Risk Assessment (e.g. Kenna)

5. Task automation (e.g. Scripting using Python or VB)

 

Happy Learning !!

Vulnerability Management - Paranoid Mode

In Tenable, a few plugins require paranoid mode to be enabled in order to report their detections.


It allows a user to specify whether we should only report vulnerabilities with a high level of confidence, or be a little more paranoid and flag a system if there is a possibility that it is or could be vulnerable. This can lead to potential false positives but gives a larger view of your cyber exposure.


Generally, when paranoid mode is enabled, the number of vulnerabilities detected will increase. Following are a few reasons:

1. Backported patches are ignored: When applications are backported by package maintainers, the version displayed when installing through a package manager may differ from a package downloaded directly from the vendor. When Paranoid Mode is enabled, backported patches will not be considered, resulting in a false positive for the 'missing' patch.


2. Some plugins (depending on how the detection is performed) may only have version information to work with, and not specific configuration information about the host. Often a vulnerability may only exist if a specific configuration is enabled and if the plugin cannot gather this info, paranoid mode is used. A common example of this is Cisco configurations noted in their advisories. In these instances you may see a false positive.


3. When a plugin is performing a direct check against a host, such as directly exploiting a certain vulnerability, this could lead to potential false positives due to the nature of the vulnerability. For example if we have to rely on an HTTP response header to determine if an exploit was successful, this could lead to a false positive for an unaffected device, or an IDS/IPS/Firewall could alter the response.


Please refer to the below URLs for more details:

https://community.tenable.com/s/article/How-to-know-when-a-plugin-is-made-paranoid

https://community.tenable.com/s/article/Which-plugins-require-the-paranoia-setting

https://community.tenable.com/s/article/How-does-Show-potential-false-alarms-impact-a-scan-scanning-in-paranoid-mode


Happy Learning !!

Vulnerability Management - Analyze before upgrading versions

Microsoft released a security update for .NET core on December 2022. Tenable also released a signature to detect the update (Plugin ID 168747 https://www.tenable.com/plugins/nessus/168747). Solution was, "Update .NET Core Runtime to version 3.1.32 or 6.0.12 or 7.0.1." Now, if you carefully go through Microsoft's support policy (https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core), you can observe, version 3.1.32 became EoL on December 13th 2022.


If you are below 3.1.32, then obviously the statement "Update .NET Core Runtime to version 3.1.32 or 6.0.12 or 7.0.1." makes sense. But, if you apply the security update to mitigate one detection, another detection of 3.1.32 being EoL will follow you soon. Tenable released a signature to detect .NET core EoL versions on 7th March 2023 (https://www.tenable.com/plugins/nessus/172177).


As a patch analyst, you should not say, "I brought old versions of .NET Core to 3.1.32 with huge effort and now you are telling me that 3.1.32 is EoL." Please remember, scanning vendors will not report multiple solutions in one finding. If a version misses a security update, that is a separate finding from the version itself becoming EoL, and hence the solutions will also be different. So, one has to be aware of the product lifecycle before applying patches.


So, before updating a software, please check when the version to which you want to update is going to become EoL. If it is going to become EoL in the coming 2-4 months, you might want to go for a major upgrade instead.
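The check above is mechanical once you have the EoL dates. A sketch, assuming the per-train EoL dates come from the vendor's support-policy page (the .NET Core 3.1 date below is the one cited in this post; 6.0's is Microsoft's published date); the function name and the 2-4-month horizon are my own framing:

```python
from datetime import date, timedelta

# EoL dates per release train (source: vendor support-policy pages).
eol_dates = {"3.1": date(2022, 12, 13), "6.0": date(2024, 11, 12)}

def patch_advice(train, today, horizon_days=120):
    """Advise whether patching within this train is worthwhile, or whether
    a major upgrade is due, using a ~4-month look-ahead window."""
    eol = eol_dates[train]
    if eol <= today:
        return "major upgrade required: train is already EOL"
    if eol <= today + timedelta(days=horizon_days):
        return "plan major upgrade: EOL falls within the 2-4 month horizon"
    return "safe to apply security update on this train"

print(patch_advice("3.1", date(2022, 12, 20)))
print(patch_advice("6.0", date(2023, 1, 5)))
```

Run at patch-planning time, this catches exactly the 3.1.32 situation: the "fixed" version is already past its EoL date.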


Happy Learning !!

Vulnerability Management - To be or not to be (A false positive)

I wanted to discuss a situation where you go clueless !!


VMware released an advisory consisting security updates for vulnerabilities @CVE-2022-31696 and CVE-2022-31699 (https://www.vmware.com/security/advisories/VMSA-2022-0030.html).


Now, Tenable published a signature to detect the vulnerability @168828 (https://www.tenable.com/plugins/nessus/168828).


If you check the response matrix for CVE-2022-31696 for ESXi 7.0 in VMware's advisory, it says the fixed version is ESXi70U3si-20841705 ("05"). If you click on the link, it will take you to the ESXi 7.0 release notes (https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3i-release-notes.html). If you scroll down a bit, you can see the download is for ESXi70U3si-20841708 ("08"). The difference between the two versions, i.e. "05" and "08", is component and driver updates: "05" has security updates only, while "08" has security, component and driver updates. I observed that, in a few cases, due to hardware dependencies, support teams are not able to upgrade to "08". But Tenable is detecting the "05" version as vulnerable. Since "05" already has the security updates, support teams are claiming this detection is a false positive.


Now, in counter, Tenable is saying that when you go from VMware's advisory to the release notes, the download file is for "08". So this is not a false positive: either there is a typo from VMware, or "08" is the correct build.


So yes, if you understood the scenario, such cases also occur. Our support team has reached out to VMware already but not sure how much time VMware will take to address this issue.  


Not sure what VMware and Tenable will do; in the interim, we are trying to make peace with the support team (ha ha).


Happy Learning !!

Vulnerability Management - Tenable.sc First Discovered Date Fluctuating



Some common environmental factors which will cause the first discovered dates to fluctuate are:


1. Targeting a system by FQDN or hostname when that name could resolve to multiple IPs. Two common examples of this are a system that is behind a load balancer or a system that has multiple NICs. Customers should be working with their network and/or DNS admins to determine if this is a possibility for the primary DNS server used by Tenable.sc, which can be found in /etc/resolv.conf.


2. Assigning systems a new IP address via DHCP when the dhcpTracking setting is not uniform across all scans for the organization.


3. Assigning the same IP to multiple systems in different networks and importing the scan results into the same repository. If 172.26.0.1 in network A is a different system than 172.26.0.1 in network B and each are scanned, Tenable.sc will consider them one system and the vulnerability data may not appear accurate due to the differences in the target systems. Customers should be working with their network and/or DNS admins to determine if this is a possibility in the environment.


4. Deploying virtual systems from the same template / image without adjusting the underlying network settings. Any duplication of FQDN, MAC, or NetBIOS across different systems will prevent Tenable.sc from uniquely identifying them, causing all the vulnerability data to collide under the same IP.
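For the first factor, you can quickly check whether a scan target name resolves to multiple addresses. A minimal Python sketch (the function name is mine):

```python
import socket

def resolve_all(hostname):
    """Return all IPv4 addresses the hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

ips = resolve_all("localhost")
if len(ips) > 1:
    print("Name resolves to multiple IPs; first-discovered dates may fluctuate:", ips)
else:
    print("Single address:", ips)
```

Run it against the FQDNs in your scan targets; any name returning more than one address is a candidate for the load balancer / multi-NIC scenario above.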


Please refer to the below URL for how one can normalize this behavior:

https://community.tenable.com/s/article/Tenable-sc-First-Discovered-Date-Fluctuating


Happy Learning !!

Vulnerability Management - Scan with low privileged service account

Recently, I encountered a situation where a few false positives appeared on some Cisco devices. Whenever false positives appear, the first thing to check is authentication. If that is happening properly, then you should go for an authorization check.


Now in this case, the scan happened with privilege level 1. After investigation, we found that the service account in use was part of multiple AD groups (ISE is integrated with AD). This created a conflict in privilege level, and the lower privilege was chosen. This in turn resulted in false positives on some Cisco devices.


Hence, always ensure that the service accounts dedicated to vulnerability scanning are not part of any irrelevant groups.


Happy Learning !!

Vulnerability Management - Detection based on application's self-reported version number

Let's understand one use case. 


Apache published a new version of HTTP Server, v2.4.55 (https://httpd.apache.org/download.cgi). Now, if you check the versions of Apache HTTP Server which are shipped with Red Hat, you will find the upstream versions are on or below v2.4.53 (https://access.redhat.com/solutions/445713).


Now, Tenable started detecting versions 2.4.x < 2.4.55 as vulnerable to CVE-2006-20001, CVE-2022-36760, and CVE-2022-37436 (https://www.tenable.com/plugins/nessus/170113). But these web servers are only vulnerable to CVE-2022-36760 when the "mod_proxy_ajp" module is in use, and the solution is to disable the module (https://access.redhat.com/security/cve/cve-2022-36760). However, even after disabling the module, Tenable will continue to flag the vulnerability, as it is not checking whether the module is in use or not.


You can use the command "apache2ctl -M" to check the loaded modules of Apache on Linux systems (https://www.tecmint.com/check-apache-modules-enabled/).
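Since the plugin does not check module state, you can script the check yourself. A sketch that parses "apache2ctl -M" style output (the sample output and function name below are illustrative):

```python
def parse_loaded_modules(output):
    """Parse `apache2ctl -M` output into a list of loaded module names."""
    modules = []
    for line in output.splitlines():
        line = line.strip()
        if line.endswith("(shared)") or line.endswith("(static)"):
            modules.append(line.split()[0])
    return modules

# Illustrative output; in practice capture it with
# subprocess.run(["apache2ctl", "-M"], capture_output=True, text=True).stdout
SAMPLE = """Loaded Modules:
 core_module (static)
 proxy_module (shared)
 proxy_ajp_module (shared)
"""
print("proxy_ajp_module" in parse_loaded_modules(SAMPLE))  # True -> host is actually exposed
```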


Hence, from my perspective you have the following options:

1. Recast the risk of the vulnerability from "Critical" to "Medium"/"Low" so that it doesn't get counted in any KPI metric

2. Provide a temporary exception for 4 or 6 months


So, whenever you come across detections based on application's self-reported version number, be vigilant and perform proper vulnerability analysis.


Happy Learning !!

Vulnerability Management - Skipping scheduled daily agent scans

Many organizations have employees working in shifts. Hence, while running agent-based scans, it becomes important to manage coverage of all in-scope workstations. Normally, to cover them, the scan window is set to 24 hours.


But because of this scan window, scheduled daily agent scans will be skipped. Since a scan will start a few minutes later than scheduled, there is a chance that the next scheduled daily scan will not start because the previously scheduled occurrence is still running.


The solution to this issue is to change the scan window to 1380 minutes (23 hours). This gives some padding for post-processing to complete before the next scan launches, avoiding overlapping scan jobs which result in scheduled scans being skipped.
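The scheduling arithmetic behind this can be sketched as follows (a toy model, assuming a daily trigger at the top of the hour; the function is mine, not vendor logic):

```python
from datetime import datetime, timedelta

def next_scan_skipped(start, window_minutes, schedule_interval_hours=24):
    """Return True if the current scan window is still open when the next
    daily occurrence is due, i.e. the next scheduled scan gets skipped."""
    end = start + timedelta(minutes=window_minutes)
    next_due = start.replace(minute=0, second=0) + timedelta(hours=schedule_interval_hours)
    return end > next_due

start = datetime(2024, 1, 1, 0, 5)  # scan launched 5 minutes late
print(next_scan_skipped(start, 24 * 60))  # True  -> a 24h window overlaps the next trigger
print(next_scan_skipped(start, 1380))     # False -> a 23h window finishes in time
```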


Happy Learning !!

Threat Intelligence vs Threat Hunting vs Threat Modeling

Threat Intelligence -> Data that is collected, processed, and analyzed to understand a threat actor’s motives, targets, and attack behaviors.


Threat Hunting -> Process of proactively and iteratively searching through networks to detect and isolate advanced threats that evade existing security solutions.


Threat Modeling -> Threat modeling works to identify, communicate, and understand threats and mitigations within the context of protecting something of value.


Please refer the below links for more information:

https://www.crowdstrike.com/cybersecurity-101/threat-intelligence/

https://owasp.org/www-community/Threat_Modeling


Happy Learning !!

Vulnerability Management - Nessus Knowledgebase

A Knowledgebase (KB) is created for each target during a Nessus scan. When a plugin collects information that needs to be "shared" with other plugins it is stored in the KB for that host. The KB can be found for a specific host in the Host Details section of the scan results reached by drilling down on that host.


Note: Nessus also collects a global KB that shares information not only between different scripts but between different hosts.


KBs are in the following format:

timestamp data_type key=value

-> timestamp: Epoch time representing when a scan completes

-> data_type: 1 is for strings, 3 is for integers


For example: 1475164035 3 portscanner/14272/Ports/tcp/1334=1
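Parsing such an entry is straightforward; a small Python sketch (the helper is mine, not a Nessus API):

```python
def parse_kb_entry(line):
    """Parse a Nessus KB entry of the form 'timestamp data_type key=value'."""
    timestamp, data_type, kv = line.split(" ", 2)
    key, _, value = kv.partition("=")
    # data_type 1 = string, 3 = integer
    parsed = int(value) if data_type == "3" else value
    return {"timestamp": int(timestamp), "key": key, "value": parsed}

print(parse_kb_entry("1475164035 3 portscanner/14272/Ports/tcp/1334=1"))
```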


There are several functions used by plugins to read or write information to the KB:


1. set_kb_item(): Adds a new item in the host knowledge base. The value can either be a string or an integer. If an item with the same name already exists in the KB, it's unaffected as the KB can have the same key listed multiple times.


2. set_global_kb_item(): Adds a new item in the global knowledge base. The value can either be a string or an integer. If an item with the same name already exists in the KB, it's unaffected as the KB can have the same key listed multiple times.


3. replace_kb_item(): Same as set_kb_item() except it will replace the value found in the key.


4. get_kb_item(): Fetches the value of the key in the KB (all of them if more than one exists) and returns the result.


5. get_global_kb_item(): Fetches the value of the key in the global KB (only the first value if more than one exists) and returns the result.


6. rm_kb_item(): Deletes a KB entry. If multiple entries exist, specifying a value makes the function only delete the entry for that specific value.


7. get_kb_list(): Returns the list of values for KB keys matching a certain pattern (e.g. "SMB/Registry/*")


8. get_global_kb_list(): Same as get_kb_list but for the global KB.
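To make the append-vs-replace semantics concrete, here is a toy Python model of the per-host KB (not Tenable's implementation; the real functions are NASL built-ins):

```python
import fnmatch
from collections import defaultdict

class HostKB:
    """Toy model of a per-host KB: a key can hold multiple values."""

    def __init__(self):
        self._items = defaultdict(list)

    def set_kb_item(self, name, value):
        # Appends; an existing entry with the same name is unaffected.
        self._items[name].append(value)

    def replace_kb_item(self, name, value):
        # Replaces any existing values for the key.
        self._items[name] = [value]

    def get_kb_item(self, name):
        return list(self._items.get(name, []))

    def get_kb_list(self, pattern):
        # Pattern matching like "SMB/Registry/*"
        return {k: v for k, v in self._items.items() if fnmatch.fnmatch(k, pattern)}

kb = HostKB()
kb.set_kb_item("SMB/Registry/HKLM/Run", "a.exe")
kb.set_kb_item("SMB/Registry/HKLM/Run", "b.exe")
print(kb.get_kb_item("SMB/Registry/HKLM/Run"))  # ['a.exe', 'b.exe']
kb.replace_kb_item("SMB/Registry/HKLM/Run", "c.exe")
print(kb.get_kb_item("SMB/Registry/HKLM/Run"))  # ['c.exe']
```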


Please refer the below link for more information:

https://community.tenable.com/s/article/What-is-the-Nessus-Knowledgebase-KB


Happy Learning !!

 

Vulnerability Management - Firewall Detection

NMAP has several techniques for firewall detection. Since enterprise networks have large subnets, it is not practical to employ such advanced techniques when scanning an environment with 10,000+ assets.


Below is a general method used by Qualys.


When there is no firewall between a scanner and a target host, all TCP packets sent by the scanner to the target host should trigger a reply packet from the target host. When there is a firewall, this is no longer true. There are two general firewall behaviors that Qualys relies on for this detection:

-> No reply (silently dropped)

-> Connection reset (RST)


With regard to the first behavior, some firewalls will drop TCP SYN packets sent to certain ports. In this case, the TCP SYN packets sent by the scanner to these ports will not generate a reply. So when we send SYN packets to the target host and do not receive a reply, we know there is a firewall.


With regard to the second behavior, other firewalls will respond to TCP SYN packets sent to certain ports with RST packets on behalf of the target host. To detect this type of firewall, Qualys analyzes the TTL values of the RST reply packets (from the firewall) and the SYN-ACK packets (from the target host). This method requires that the firewall allows SYN packets to some ports through to the target host while answering SYN packets to other ports with RSTs on its behalf.
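The TTL comparison can be sketched as simple logic (a toy model of the inference, not Qualys's actual code; hop counts below are illustrative):

```python
def likely_firewall_rst(rst_ttl, synack_ttl):
    """Infer an in-path device when RSTs arrive with a different remaining TTL
    than the host's own SYN-ACKs, i.e. the replies originate a different
    number of hops away from the scanner."""
    return rst_ttl != synack_ttl

# SYN-ACKs from the host arrive with TTL 60 (initial 64, minus 4 hops);
# RSTs for filtered ports arrive with TTL 62 -- generated only 2 hops away.
print(likely_firewall_rst(rst_ttl=62, synack_ttl=60))  # True: a firewall is answering
print(likely_firewall_rst(rst_ttl=60, synack_ttl=60))  # False: replies come from the host
```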


False positives can occur when network conditions are bad, leading to packets being dropped. You can choose to disable the TCP ping method, or to treat ICMP unreachable messages as a sign of a dead host to handle proxy ARP replies.


Please refer the below links for more information:

https://success.qualys.com/support/s/article/000006102

https://docs.rapid7.com/nexpose/configuring-asset-discovery/#collecting-information-about-discovered-assets

https://community.tenable.com/s/article/Scan-is-returning-results-for-IPs-which-are-known-to-be-dead-or-non-existing

https://nmap.org/book/firewalls.html

https://www.tenable.com/blog/4-ways-to-improve-nessus-scans-through-firewalls  


Happy Learning !!

Vulnerability Management - Understanding vulnerability posture

Understanding the vulnerability posture of an organisation at a basic level helps you drive remediation efforts. So, I don't know what t...