Reconnaissance information is of critical importance. Proper collection and classification will help you discern where to focus your analysis. This, in turn, establishes the foundation of the models we test against and informs the overall strategy for how to execute an operation. Today we are covering how to analyze the collected data, how to properly set up a test environment, how to find vulnerabilities, and how to establish our strategy. This article is for educational purposes only and I am not responsible for your actions.
If you followed along with the recon article, you should have gathered some crucial intel on the target system(s). This is where those devices we grabbed right after the new year (article link) come into play. From our reconnaissance, we can determine whether the targets are running on x86, ARM, or some other architecture. With this information, we can set up a proper testing environment and flesh out a strategy to execute. Not to get ahead of ourselves: there is still a lot of ground to cover in the analysis phase. Rushing any of the phases that precede strategy will ultimately result in a weaker strategy. The worst case is a strategy that fails because of inaccurate information, all because you failed to execute recon correctly.
We will need to evaluate the versions of the software in use, as covered in our recon work, and make sure that we fully understand each version number and its potential impact. For example, Apache 2.1.45 is different from Apache 2.2.3. While both are version 2.x.x, the differences between one version and the other can mean the difference between accurate and inaccurate testing. Wasting time on incorrect data has a cost; your time is valuable, so try not to waste it during your analysis. It is also worth noting that there are a variety of ways to research any specific component of the target's software stack. Be patient with your analysis, and be thorough.
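To make the version point concrete, here is a minimal Python sketch for comparing dotted version strings against an advisory-style range. The version numbers and the range below are purely illustrative, not a real advisory:

```python
# Hypothetical sketch: compare dotted version strings so test results
# are tied to the exact build, not just the major version.
def parse_version(v: str) -> tuple:
    """Turn '2.2.3' into (2, 2, 3) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def in_range(version: str, low: str, high: str) -> bool:
    """True if low <= version < high (a typical advisory range)."""
    return parse_version(low) <= parse_version(version) < parse_version(high)

# The range below is illustrative only, not taken from a real CVE.
print(in_range("2.2.3", "2.2.0", "2.3.0"))   # True: inside the range
print(in_range("2.1.45", "2.2.0", "2.3.0"))  # False: different branch
```

Tuple comparison handles the "2.1.45 vs 2.2.3" trap correctly, where naive string comparison would not.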
Everything you find on the internet depends on several services running behind it. From the provider to the host, a number of hops have to occur for you to reach a service: DNS, web servers, application servers, proxy servers, CDNs, and a variety of other possible services, all just to make a site or application work. In our recon we went over some of these and how to extract real information from them. If you did not take in the information provided in the previous write-up, you may have a hard time collecting accurate information now. Again, wasting research time because of inaccurate information is a cost, and time is valuable.
Armed with our accurate information, we should now start documenting the potential vulnerabilities we can exploit. CVE databases are easy to find, as are exploit tools via Packet Storm and other resources. These resources give us some very easy ways to identify whether the software versions we plan to analyze are vulnerable. I would also encourage you to look for documentation from the potential target, particularly if they provide public API access. That documentation can provide wonderful insight into the inner workings of their application and, in some cases, source code. All of this helps us set up our testing and further analysis: the more accurate the reproduction environment we create, the better our chances of successfully performing an exploit. While researching a particular vulnerability, you should also read comments about what others have encountered when penetration testing with the relevant exploits.
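As one hedged example of working a CVE database programmatically, the public NVD CVE API (v2.0) can be queried by CPE name. The sketch below only builds the query URL, leaving the actual request (and your OpSec around making it) to you; the CPE string shown is illustrative:

```python
# Sketch: build a query against the public NVD CVE API (v2.0) for a
# given CPE name. Nothing here touches the network; we only construct
# the URL so you can fetch it however your OpSec model allows.
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cpe_name: str) -> str:
    """Return the NVD API URL listing CVEs affecting one CPE identifier."""
    return f"{NVD_API}?{urlencode({'cpeName': cpe_name})}"

# CPE names follow the cpe:2.3 format; this one is an illustrative example.
url = nvd_query_url("cpe:2.3:a:apache:http_server:2.2.3:*:*:*:*:*:*:*")
print(url)
```

Pairing this with the version check from your recon data gives you a repeatable way to re-run the lookup as the target's stack changes.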
There is also a high chance you will encounter some very interesting trolls when working with exploit code. For example, a tool that automates a potential vulnerability may not compile or run without errors; the author may be doing this intentionally to stop script kiddies from simply abusing their findings. You should spend some time learning a programming or scripting language; it can only benefit you, both as a hacker and as a person. Understanding how systems work is great; being able to build those systems both garners financial gain in the form of employment and gives you a better grasp of how they can be compromised. Social engineering, programming, and resilience will lead you to far more success in the world of hacking than any toolkit can. This is not to say that tools like Metasploit are of no value, but they cannot replace actually learning how systems work and how to build them.
We now have a solid understanding of the system, the services, and even some of the known or possible exploits against it. This is a solid start; now we need to take our devices and mimic the environment so we can do some solid testing without risking disclosing our intent to a potential target. I must reiterate: this information is for educational purposes and I am not responsible for your actions. That being said, let's set up our environment on our devices to replicate what we are planning to target. This will include installing an operating system and services such as a database, a web server, a forum, and so on. These tasks will seem tedious, but they will pay dividends when developing our strategy.
Testing will take a decent amount of time. We should take our collected and verified data and now build a system. Utilizing the machine we got off Craigslist, mentioned in the beginning-of-the-year article, we can begin to reproduce the targeted environment. Do your best to build a 1:1 reproduction; this means matching the operating system when possible and even the cloud provider if necessary. You can use your recon data to determine where a system is hosted and set up your own on that platform. Remember your OpSec models when doing this, and avoid KYC when you can. As covered previously when setting up our environment, we should assume the bare minimum of hardening first, then flesh out our vulnerability testing with progressively stricter policies in place. At this point we should also determine our scope, meaning what we plan to gain from this research and its execution.
The scope is critical to define before going into full research and attack mode. For example: do we want data from the database, or do we simply wish to take down a service? The threat modeling of the attack differs depending on the scope. You would not need to understand most of the system architecture to deface a self-hosted WordPress site, whereas getting a database dump from a social media service is another matter. While both are "sites" and "services," one has many more moving parts than the other, which again points back to understanding the scope and purpose of this research to avoid wasting time. Your scope is just that, your scope; you make that choice and you are responsible for it.
Now that we have a scope, our test system installed, and a 1:1 of the system's service versions and application stacks, maybe even their application itself (check GitHub/GitLab), we should begin testing and analyzing. We can review the CVEs, exploit resources, and tools available to us and test on our local network. During this testing, you will need to document the behavior of the host you are testing against. This means running system commands such as htop and tcpdump and monitoring the system logs. Remember, we need to play both sides of the chessboard in our testing. Vulnerability testing is fun, but we need the data on what the other side sees to refine our strategy and the threat models around it. Pressing a button is only part of the equation; outside this test environment, we must always be mindful of what surveillance and other tools could be leveraged against us. Each testing round should be scoped as well; it is important to time-box each line of investigation to prevent the tunnel vision that often sets in deep into research mode.
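A rough sketch of playing the defender's side: while replaying attempts against the 1:1 box, scan its logs for the kinds of lines an alerting system would flag. The signatures and sample log lines here are hypothetical; adapt them to whatever stack you reproduced:

```python
# Sketch: tally defender-visible log lines on the 1:1 test box so you
# see exactly what the other side of the chessboard sees.
import re
from collections import Counter

# Patterns a defender might alert on; extend these for your own stack.
SIGNATURES = {
    "auth_failure": re.compile(r"authentication failure|Failed password"),
    "sql_error":    re.compile(r"SQL syntax|sqlstate", re.IGNORECASE),
    "server_error": re.compile(r'" 5\d\d '),  # 5xx status in access logs
}

def scan_log(lines):
    """Count how many log lines match each defender-visible signature."""
    hits = Counter()
    for line in lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits[name] += 1
    return hits

# Hypothetical log excerpts standing in for the real files you monitor.
sample = [
    'sshd[991]: Failed password for root from 10.0.0.5 port 51122',
    '10.0.0.5 - - [01/Jan/2021] "GET /index.php HTTP/1.1" 500 312',
]
print(scan_log(sample))
```

Run something like this against each round of testing and note which of your probes lit up which signatures; that record is what refines the strategy later.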
Along with looking for low-hanging fruit, such as their application code being publicly available, be sure to review the commit history of a project, especially if it lives in a self-hosted GitLab or other git environment. There is often a plethora of valuable information to be gained there, including acknowledgments of bugs or even a TODO to replace a version that is still in use. Again, this goes back to good recon data and using it. In testing your 1:1, experiment with API calls, experiment with sending payloads via curl, or write your own client library to make such calls against a particular service. This process takes time, but with it comes a better understanding of the target application or service. Another great experiment during this testing is to change the endpoint: maybe they have a subdomain such as *-prod.*.com, so we should determine whether a *-$otherenv.*.com exists, as this could open up a ton of doors for our experiments. Remember, since we are testing our 1:1 locally we will not touch those live endpoints, but the documentation could reveal a large number of behavioral differences. API documentation for a particular service could pave the exact way forward in your testing; information is power.
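The endpoint-variation idea boils down to simple string manipulation. The environment names below are common guesses, not a definitive list, and `api-prod.example.com` is purely a placeholder hostname:

```python
# Sketch: given a known production hostname, enumerate plausible
# non-production variants worth checking in DNS during recon.
ENVIRONMENTS = ["dev", "staging", "test", "qa", "uat"]  # guesses, extend freely

def env_variants(hostname: str, prod_tag: str = "prod") -> list:
    """Swap the '-prod' label for other common environment names."""
    if f"-{prod_tag}." not in hostname:
        return []
    return [hostname.replace(f"-{prod_tag}.", f"-{env}.") for env in ENVIRONMENTS]

for candidate in env_variants("api-prod.example.com"):
    print(candidate)  # api-dev.example.com, api-staging.example.com, ...
```

As the paragraph notes, in 1:1 testing you would resolve these candidates during recon rather than poke them live mid-analysis.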
As we continue experimenting, let's validate the behavior of what is available: API calls into their environment, payload checks against system services, and in some cases whether a key has been exposed via a commit, or what type of hashing is applied to the key the service uses. This information can prove quite fruitful, since it can tell us which hashing is likely used even in the database. Testing APIs in our local environment, made possible by the widespread adoption of open source, could allow us to find SQL injection vulnerabilities that we can leverage to open up a table, or even a whole database, for access. Again, the more thorough you are in testing every element, the better your data will be for analysis and the strategy built on it.
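On the hashing question, a quick and deliberately non-authoritative triage is to look at the shape of a leaked digest. Length alone only suggests candidates (32 hex characters could be MD5 or a truncation of something else), so treat the output as a lead, not proof:

```python
# Sketch: guess the hashing scheme behind a leaked digest from its
# shape alone. These are leads to verify in your 1:1, not conclusions.
import re

def guess_hash(digest: str) -> str:
    digest = digest.strip()
    if digest.startswith(("$2a$", "$2b$", "$2y$")):
        return "bcrypt"
    if digest.startswith("$argon2"):
        return "argon2"
    if re.fullmatch(r"[0-9a-fA-F]+", digest):
        # Plain hex: length narrows the field but never proves it.
        return {32: "MD5?", 40: "SHA-1?", 64: "SHA-256?",
                128: "SHA-512?"}.get(len(digest), "unknown hex")
    return "unknown"

print(guess_hash("5f4dcc3b5aa765d61d8327deb882cf99"))  # MD5? (md5 of 'password')
print(guess_hash("$2b$12$abcdefghijklmnopqrstuv"))     # bcrypt
```

Confirm the guess by hashing a known value in your reproduction environment and comparing formats.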
During this testing phase with your 1:1 environment, use all the tools at your disposal; it would be naive and wasteful not to leverage every advantage while you have it. It can also be beneficial to search password database leaks, as companies are often reluctant to update their policies for certain divisions. Likewise, when looking at a target, it is very useful to verify email addresses with checks on sites such as haveibeenpwned and others. Again, information gathering, testing, and verification go a long way. In some cases unexpected findings can make our testing even easier, so use every advantage during this phase: once we analyze our findings and move away from our 1:1 environment, we no longer have the luxury of playing both sides of the chessboard.
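For reference, the Have I Been Pwned "Pwned Passwords" range API uses a k-anonymity scheme: only the first five hex characters of the password's SHA-1 ever leave your machine, and you match the remaining 35 locally against the returned suffixes. A minimal sketch of the local half, with the network call deliberately omitted:

```python
# Sketch of the HIBP Pwned Passwords k-anonymity split: hash locally,
# send only a 5-char prefix, compare the suffix against the response.
import hashlib

RANGE_API = "https://api.pwnedpasswords.com/range/"

def hibp_prefix_suffix(password: str):
    """Return (prefix, suffix): the 5 hex chars sent to the API and the
    35 hex chars you compare against the response locally."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    return sha1[:5], sha1[5:]

prefix, suffix = hibp_prefix_suffix("correct horse battery staple")
print(RANGE_API + prefix)  # the only thing the remote service ever sees
```

This model matters for your own OpSec too: you can check credentials against the corpus without disclosing them to the service.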
As you test, document and note all behavior from both sides. When you execute an API call, send a bad payload, or request information from a service: how does the system respond? What shows up in the logs? Was there any odd behavior, such as a memory increase or a thread-lock event? These observations can all become very valuable in our scoped testing, and with our scoped goal we can use this information to our advantage while we still have the time to do so. Test every CVE and theory, and even craft your own based on the behavior of the system. The more thorough you are, the better you can adjust or expand your scope as more options become available. Again, start everything with very relaxed policies, move forward with stricter and stricter policies, and finish with a properly threat-modeled environment. Your finalized exploit or entry point should work even in the strictest of environments. We can assume a service is probably running something close to best practices, or at least we hope so. Do not assume everything will be easily exploitable in the wild; that assumption is also very dangerous to our OpSec.
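One lightweight way to keep those both-sides notes structured is a JSON-lines log per test run, so later analysis works on data rather than scattered notes. The field names and the sample entry here are just a suggested shape:

```python
# Sketch: record each probe and what both sides observed as one JSON
# line, building a structured record for the analysis phase.
import json
import time

def record(logfile: str, probe: str, response_code: int, defender_notes: str):
    """Append one structured observation to a JSON-lines file."""
    entry = {
        "ts": time.time(),
        "probe": probe,                     # what we sent
        "response_code": response_code,     # what came back
        "defender_notes": defender_notes,   # what appeared server-side
    }
    with open(logfile, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

# Hypothetical observation from a relaxed-policy round of testing.
record("test-run.jsonl", "GET /wp-login.php", 200,
       "access log entry only, no alert fired")
```

Rerunning the same probes under progressively stricter policies and diffing these logs shows exactly which entry points survive a hardened configuration.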
Lastly, after all this testing, you now have a ton of data about what worked and what did not. You will need to prioritize the findings that align with your scope for success and limit time spent on those that do not. You may also find that nothing worked; this is also OK. You should then consider other tactics, such as the social engineering mentioned in the recon article. Again, you will need to decide on these tactics based on your scope, and these are your actions. YOU are responsible for YOU.
During the writing of this article, a vulnerability was found in the service I was going to use as an example. I am currently in disclosure discussions and, for ethical reasons, cannot say much about it until after it has been patched. I was not able to find another suitable example service before publication. The delay of this article was due to the rewrites and changes needed to prevent the vulnerability from becoming public knowledge without the service being given time to address it. The recon and analysis took weeks of proper work, and I could not compress that into a couple of days just to add it here. More on this in the future.
Strategy and Planning
After testing and analyzing our results, we can now strategize the execution and threat model against our own plan. A common mistake is failing to threat model against the attack strategy itself; this has led to unexpected results such as the target changing their infrastructure, changing their code, or upgrading to eliminate the vectors entirely. At each step along the way you must consider that there are counters and, in some cases, preventative measures that can be taken if the target becomes aware of the potential threat. This is where OpSec and threat modeling culminate in the reality of the situation, no longer merely measures taken against a perceived attack. Once you execute, all bets are off.
When building a strategy we must consider, just as we have numerous times before, the where, when, how, and with what. These are critical, as each answer will require a thorough vetting of our entire process, including threat modeling every perceivable angle. Whether writing code, hacking, or building a house, you always want to measure a few times and cut once. This prevents cost, time loss, and exposure. Rushing to complete something often ends poorly; in the case of hacking, it can and will result in your being prosecuted. Do not take this lightly.
I would advise you to avoid the common mistake of posting publicly about targeting a service or site, as this will be used against you. I would also recommend reading next week's article before taking any action, as it will provide further insight on the matter. There are many facets to all of this, and they constantly change based on new findings and the ever-evolving world we live in. Keep tabs on the target, as exploits are constantly found and published, in some cases without proper disclosure. Any company with a security team will work to stay on top of these updates and apply patches; sometimes a simple service configuration change is enough to thwart an attack. Be very mindful of these changes, as they can create problems for your strategy.
In this article, we covered analyzing the data obtained from recon, setting up a test environment, how to scrutinize an exploit, and how those steps build a strategy for success. We now have some serious thinking to do. Next week we will be covering execution, so before you get wild, there is more work to do before actually following through. Next week's article will be on fortitude and should be read before taking any action beyond our testing and analysis. Remember, being impatient and hasty in our processes leads to poor execution and adds attack vectors to our own operational security.
Rushing to execute exploits in the wild can be very dangerous to both you and the target. Most importantly, we need to validate our strategies repeatedly: threat model against our strategy, and when we think we have a well-thought-out plan, do it again. There is no shame in being methodical in the analysis of your strategy when it comes to security. The same applies to being methodical in the analysis of your own systems. Just as we have analyzed targets, we must analyze our own setups, because they too are targets for a variety of adversaries in the surveillance world we live in. That surveillance can lead to a variety of outcomes depending on the actions we take. We will cover more of that next week.
As always, this information is for educational purposes, and I am not responsible for your actions. I encourage you to keep learning and to continually run threat models against yourself based on whatever you decide is an adversary. It is important that we have not only good defensive models but also an understanding of how offensive strategies are formed and executed. This gives us a clear picture of how an attack can occur and, when dealing with the post mortem of an event, how to react. Never expect to be immune from lapses in operational security; they happen. This applies to systems as well as to our own OpSec when striving for anonymity. As the year progresses we will see larger attacks on privacy, especially in the United States; use this information to strengthen your understanding and keep yourself safe. Help your fellow frens and family as well, they are also targets.
I want to thank JACE for proofreading. I also want to thank 3On for helping with testing and verification of a vulnerability that was found in writing this article, more on that in the future once disclosure and discussions end. Feel free to reach out to me on the following platforms:
Tox: D7D264EA7541C4324625A8360267C3C54F9C1AF564D4266FE45F2BCB68924E21CB2A75746D51