
Friday, June 12, 2015

OPINION: Why Security Is Killing Risk Management


   For a while now, I have been writing quite a bit about the difference between security and mitigation. In that time, the United States has been riddled with security breaches in both the physical and cyber realms. Whether they were riots over allegations of police brutality or breached firewalls protecting sensitive data, our headlines seem to point to a failing state of security.
 
   As a professional who is on social media quite a bit, I have witnessed firsthand the hysteria surrounding these incidents. Every attack seems to be tweeted or blogged about to a point bordering on obsession. To be honest, I could not be more enthralled. Sure, these events are quite instructive for practitioners: we learn how to defend against similar attacks in the future, or how to conduct them ourselves. But that's not what excites me. No. I'm thrilled to see events which demonstrate the connection between the psychology behind security, the illusion of protection it provides, and how our confusion about the difference between security and mitigation has created our current security crisis.

Security vs Mitigation

   In order to understand how security is killing risk management, let's go over a few key terms. First, as stated before, security is nothing more than a psychological construct that provides us with the assurance that we've done everything possible to keep ourselves safe from various threats. Humans are very fearful of their demise and, naturally, see threats to their survival as intolerable. Often, this feeling of security comes from repeating "safe" behaviors and providing what we assume are adequate protection measures. This, as we all know, is often based on untested data and the myth that victims can think in much the same way as their assailants.
 
   Protection is what we do proactively to detect, deter, delay, and destroy attackers, through mitigation. A great example is an executive protection detail. No successful detail operates on the assumption they can prevent attacks. Everything they do is done on the assumption that the attack will happen. This is what makes them very good at what they do and why so many in this field go on to become successful throughout the security industry.

   Security, as we know it, is often done with the mindset that victims can prevent attacks. For example, we lock doors because we assume they will deny an adversary entry. What we fail to grasp is that the lock is there to delay the attacker so natural observers or victims have sufficient time to detect the attack and take action. Many victims fall into a mindset where a locked door is all they require to be safe, without sufficiently comprehending the scope of the adversary's capabilities or the inadequacy of their own mitigation tools. Knowing the difference between security and mitigation is a great start to understanding the importance of risk management over just feeling safe. Heck. It's the key to it.
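   To make that delay-versus-detection point concrete, here's a rough sketch in Python. The timing numbers are made up purely for illustration; the point is that a barrier only matters if the delay it buys is longer than the time it takes someone to notice the attack and respond.

# Rough sketch: a lock "works" only if the delay it buys exceeds the time
# needed to detect the attack and respond to it. All numbers are hypothetical.

def attack_succeeds(delay_s: float, detection_s: float, response_s: float) -> bool:
    """True if the attacker gets through before anyone can intervene."""
    return delay_s < (detection_s + response_s)

# A residential deadbolt might buy roughly 60 seconds against a determined attacker.
print(attack_succeeds(delay_s=60, detection_s=30, response_s=300))  # True: the lock alone is not "security"
print(attack_succeeds(delay_s=60, detection_s=10, response_s=40))   # False: detection and response make the delay matter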

The Important and Not-So-Subtle Difference Between Threats and Vulnerabilities

   Speaking of risk management, there are a few other terms I think we should cover. Risk management has two fundamental keystones - threats and vulnerabilities. Often, we confuse threats with vulnerabilities in ways we don't always catch. For example, I've seen people react to discovering a vulnerability as though it were one of the worst security events. This couldn't be further from the truth. In fact, I find it quite insightful to know which areas a potential bad guy could exploit to enable their attack. Sure, we'd like to catch these vulnerabilities before an attack, but that's not always the case. What's our insurance policy for such attacks? Planning ahead as if the attack is already going to happen. What do we call that? Oh, that's right - mitigation. Threats are merely bad actors who use vulnerabilities to conduct kinetic operations against their targets.

   Sometimes, I feel as if we forget that catching bad guys is the goal of effective protection measures. The threat will come, and you should be prepared long before it does. You could plug every hole you can find, but ultimately, as I heard throughout my military career, "the enemy gets a vote." Inevitably, he will find a way in that you missed. You should plan as though Murphy's law is actually true. Often, no matter what you do, you may not catch the bad actors. This leaves you with having to take away as much power from the enemy's punch as possible. Whether you're reinforcing concrete or hardening firewalls, the premise is the same - if you can't beat 'em, make it hard as heck for them by shoring up existing vulnerabilities and anticipating the impending attack.

   Perhaps the two most important and misunderstood terms in risk management are probability and possibility. I see you over there laughing. If you are, then you probably know exactly why this is such a pet peeve of mine. With every major security event, there's always someone on social media who declares "the end is nigh." They begin rattling off how bad the breach was and end by telling you how bad it's going to get. Very rarely do you actually receive any sort of mitigation advice. If you've been following me since the now-infamous OPM hack, you've no doubt heard me prattle on about this.

   Most of the consternation about the state of security is centered on our confusion between probability and possibility. This was perfectly illustrated by a not-so-recent story about the Islamic State capturing an airbase which had a few MiGs. Immediately, social media erupted with reports and predictions about ISIS flying MiGs very soon. If you know anything about training modern pilots and how the U.S. conducts targeting operations, you know this is not likely to happen. In other words, the probability of MiGs flying over ISIS territory is very small. Sure, it's possible but not likely. A reality star who isn't a narcissist is possible but not very probable. This is important to remember because security measures often fail when they are chosen based on how possible something is rather than how probable it is. Countless resources are expended on something that is not likely, while we ignore the threats we encounter daily. Successful security organizations employ measures that strike a balance between the threats most likely to occur and the needs of the end-users.
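   If you want to put rough numbers on that distinction, the classic way is to weigh each scenario by likelihood times impact. Here's a minimal sketch in Python; the scenarios and figures are invented purely to illustrate why the scary-but-improbable item lands at the bottom of the list.

# Rank hypothetical scenarios by expected annual loss (likelihood x impact),
# not by how frightening the worst case sounds. All numbers are made up.

scenarios = [
    {"name": "Phishing of ordinary employees",                "likelihood": 0.90,   "impact": 50_000},
    {"name": "Unpatched internet-facing server exploited",    "likelihood": 0.40,   "impact": 500_000},
    {"name": "Adversary flies captured jets at headquarters", "likelihood": 0.0001, "impact": 10_000_000},
]

for s in sorted(scenarios, key=lambda s: s["likelihood"] * s["impact"], reverse=True):
    print(f'{s["name"]}: expected annual loss ~ ${s["likelihood"] * s["impact"]:,.0f}')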

Protect Yourself By Understanding Your Risks

   Risk management is nothing more than understanding what you have, whether you can lose it, who or what could take it from you, and what it will take to get it back or recover from its loss. In essence, risk management is nothing but acting proactively against a probable threat and ensuring you’re able to protect and if need be, recover from its loss or damage. The problem is, if social media is any indicator, many companies and organizations don’t do this. Again, let’s briefly discuss the OPM hack. I saw the eyeroll. I know we don’t have all the facts. I get that. I digress.

   OPM was allegedly hacked by attackers who stole sensitive data on federal employees. This is, understandably, big news. As it should be. The attackers were able to gain the information by attacking unpatched Department of the Interior servers. The information, according to folks formerly in the intelligence community, is extremely valuable counterintelligence material, and its compromise is completely unacceptable. What's striking is, as I have noted on Twitter, the servers were connected to the Internet and vulnerable to outside attackers. Yet neither OPM nor the Department of the Interior bothered to patch the servers or encrypt their data. They, presumably, thought the threat of attack was minimal and did not require adequate mitigation. Imagine how much smaller the uproar would have been had they simply encrypted the data they stored. The government did everything I said earlier not to do.
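   Encrypting stored records is not exotic, either. Here's a minimal sketch using the third-party Python cryptography package; the record is a made-up example, and real key management (keeping the key far away from the data) is the hard part that isn't shown here.

# Minimal "encrypt the data you store" sketch using the `cryptography` package
# (pip install cryptography). The record below is fabricated for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, kept in a key vault, never beside the data
fernet = Fernet(key)

record = b"name=J. Doe; SSN=000-00-0000; clearance=TS/SCI; investigation notes..."
stored = fernet.encrypt(record)  # what actually sits on the server

print(stored)                    # ciphertext: useless to whoever exfiltrates it without the key
print(fernet.decrypt(stored))    # only the key holder recovers the plaintext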

   So what’s the answer? Simply, don’t do security but do mitigation. Being proactive with protecting yourself and your assets doesn’t require hiring Blackwater/Xe to track down Chinese hackers before they strike. No. Tailor your protection to what you will do when the attack occurs, the mission and goal of protection (detect, deter, delay, and destroy attackers), and what it will take to recover from the attack. Balance your measures between the likely or probable threats versus those that are possible but not highly likely. Before venturing off into the great abyss of security’s greatest enablers (fear, uncertainty, and doubt), I implore you to “see the light” and find the “truth” in mitigation through risk management.

Wednesday, March 18, 2015

OPINION: What Mrs. Clinton's Email Problems Can Teach Us About Security


Informal pose of Clinton, 2011. "Msc2011 dett-clinton 0298" by Harald Dettenborn, licensed under CC BY 3.0 de via Wikimedia Commons.


Over the last two weeks, we’ve been inundated with emails and presidential candidates. I won’t spend a lot of time talking about Mrs. Clinton and what her emails may or may not contain. I could spend an entire series of blog posts on that topic. However, there are some very interesting insights this story gives us into how we perceive what security is, where we feel most secure, and whether any of that makes us more secure.

So let's begin our discussion. NOTE: I'm no hacker or IT guru. I'm a guy who blogs and who has a ton of opinions on this stuff. However, I'm also someone who has worked with senior-level persons, and I understand how some of these components work. By no means are my opinions facts; they are merely points to consider.
  1.  Security is about convenience over protection. Remember when I said "security is about peace of mind"? Mrs. Clinton decided it would be more "convenient" to send her emails (personal and professional) through a single server which she owned. Why? She reasoned that, because her government-issued BlackBerry could not hold more than one email account, it would be better to have a single account. Many users, especially government users, bypass existing security protocols and do and have done exactly this (except most don't have servers at home). Does this make it right? No. The government has policies in place that state this is bad security practice and a violation of certain public records laws. Yet ask any security professional how many times they've witnessed an end-user compromise protective measures through circumvention out of convenience, and you'll note immediately how much their eyes roll.
  2. Just because a senior-level government official is discussing something doesn't make it "classified information". There's been a massive amount of speculation about the kind of information any investigation would turn up on Mrs. Clinton's servers. Like I said, I won't attempt to go into that. However, let's address why we perceive there would be classified information and whether that's likely. Sure, being Secretary of State entails having access to some of our nation's greatest secrets. The job requires it. Some of that information is considered "classified" and some not so much. When pieced together with other information, or even on its own, that material could be extremely sensitive. The very nature of the potential discussions Mrs. Clinton could have had over email has created a great amount of concern – as it should.

    Senior leaders are given amenities like secure telephone and other communication lines in their homes and offices to facilitate these kinds of sensitive discussions with "cleared" persons. In fact, during ASIS 2014, I had the great honor and privilege of hearing Colin Powell speak about his time as Secretary of State. General (ret.) Powell recalled his last day in Mrs. Clinton's old job, when the Diplomatic Security Service agents assigned to him and the information technology staff removed the secure lines from his residence before departing. It should be noted that Mrs. Clinton should have had the same amenities extended to her. As such, I'd be curious whether the Internet service her server used ran over government-installed communication lines or privately owned and installed ones. I digress. Given the nature of what could have been discussed, it should give security practitioners who advise senior leaders on protective measures pause.
  3. Government computers and services are NOT intrinsically more secure. These last two weeks, the deluge of speculation that Mrs. Clinton's email traffic would have been more secure had it gone through government servers has been extreme. Seriously, have we forgotten the number of breaches of government email and sites in the past few years? We've caught hackers breaking into NASA, NSA, State Department, and Pentagon computers. Most of this confusion comes from not quite understanding what having your email on a government server entails versus what a private (or, as widely-and-annoyingly termed this week, "homebrew") server does. People hear "government server" and immediately conjure up an image of a secure server with loads of encryption which would require a team of seasoned hackers to compromise. In fact, while watching a cartoon the other day with my son, a character bragged, "I hacked NORAD when I was six." Hacking into these systems should be a big deal because it is. However, they are not invincible to the same kinds of human security failures and poor security mindsets that plague their commercial counterparts when systems are designed and implemented.

    Your government email service can be hacked just as easily as your private email: lose the credentials you use to log into a government computer without noticing, or use a password or PIN that is easily guessed. Sadly, that happens more often than many outside of government service would ever be willing to admit. A government server does bring one thing a private server does not always have – the full weight of your agency's information technology team, most of whom are the best and brightest at what they do and who defeat a variety of threats constantly. In other words, when a breach occurs, this team can respond immediately and use the resources of the government to mitigate it. Mrs. Clinton knew this, yet felt she and her staff could do better. Given how little we know of what could have been breached, or whether a breach even occurred, this could have been true. That being said, it doesn't make any of her decisions right.
  4. The extent of protection provided against sensitive information disclosure is not up to the user but to those with the designated expertise within the organization. Mrs. Clinton, while acting as Secretary of State, had every right, to some extent, to make her own determinations regarding how her agency would protect sensitive information. That did not give her the right to decide, without consultation or coordination with her information security staff at the State Department, to forgo policies she enforced on her subordinates. It is highly doubtful Mrs. Clinton would have absolved a junior-level employee caught breaching sensitive information through a personal email account. No, we all know she would have directed her staff to punish them. However, the implied arrogance of believing you can enforce a security policy meant to mitigate vulnerabilities and lessen risk while ignoring your own failure to abide by those very decrees is striking, to be honest.
  5. Politics obscures our ability to ascertain the more important security issues in crises like these. Mrs. Clinton’s enemies are clamoring to be the first one to hand her an indictment or hold her letter detailing her retirement from politics. While that is unlikely, it seems to be a centerpiece in most political discussions regarding the emails. Most of this is centered around the potential for classified information being on the servers. Again, this is unlikely, given we still have zero clue about what’s on the server.

    The very relevant questions pertaining to total protection and mitigation have not been discussed. No one has addressed to any significant degree whether Mrs. Clinton is the only Cabinet member to have done this (she isn't) or whether she received advice from her designated State Department IT staff (I'm betting she didn't and instead relied on her political staff's IT department, who are not government employees). I'd also be curious whether the Secret Service devotes any time to protecting its protectees online (doubtful, and perhaps an area to pursue). How many people had access to her server? At what level were they cleared? This matters because even to read unclassified emails containing "For Official Use Only" material, you need to be a "cleared" employee. Very few people are asking these questions, but they should.

Does any of this make us safer now that we know? In some ways? Yes. In others? No. Mrs. Clinton's email crisis occurred for a variety of reasons. Many of those were aggravated because she is a political entity at her core. She may have felt that, by having a server she owned, she was more "secure" from some threats of the political variety. While it is always good to protect yourself from threats, one should not forget the more likely and persistent threats which are present because of the job you hold. She lost sight of that and ultimately forwent some very sound security practices. Then again, she may well have had a number of mitigation measures in place. Unfortunately, we may never know what they were and thus remain a bit unsure of our protection.

Wednesday, August 7, 2013

Ten OPSEC Lessons Learned From The Good Guys, Bad Guys, and People-in-Between



If you've been in the security world long enough, you've heard of a term called "OPSEC" or operational security. This is a security discipline in which organizations or individual operators conduct their business in a manner that does not jeopardize their true mission. If you're a police officer who is staking out a house, it would be bad OPSEC to sit outside the house in a marked police vehicle. I think it's prudent we discuss this discipline so we can better analyze our own processes by which we protect ourselves and our operations. Reviewing the OPSEC process is a great place to start. The following come from Wikipedia (I know - it's super-scholarly):
  1. Identification of Critical Information: Identifying information needed by an adversary, which focuses the remainder of the OPSEC process on protecting vital information, rather than attempting to protect all classified or sensitive unclassified information.
  2. Analysis of Threats: the research and analysis of intelligence, counterintelligence, and open source information to identify likely adversaries to a planned operation.
  3. Analysis of Vulnerabilities: examining each aspect of the planned operation to identify OPSEC indicators that could reveal critical information and then comparing those indicators with the adversary’s intelligence collection capabilities identified in the previous action.
  4. Assessment of Risk: First, planners analyze the vulnerabilities identified in the previous action and identify possible OPSEC measures for each vulnerability. Second, specific OPSEC measures are selected for execution based upon a risk assessment done by the commander and staff.
  5. Application of Appropriate OPSEC Measures: The command implements the OPSEC measures selected in the assessment of risk action or, in the case of planned future operations and activities, includes the measures in specific OPSEC plans.
  6. Assessment of Insider Knowledge: Assessing and ensuring employees, contractors, and key personnel having access to critical or sensitive information practice and maintain proper OPSEC measures by organizational security elements; whether by open assessment or covert assessment in order to evaluate the information being processed and/or handled on all levels of operatability (employees/mid-level/senior management) and prevent unintended/intentional disclosure.
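To make steps 3 and 4 a little more concrete, here's a minimal sketch in Python. It compares a handful of invented OPSEC indicators against the collection capabilities you assume your adversary has, and flags the overlaps that need a countermeasure; every entry is hypothetical.

# Minimal sketch of OPSEC steps 3 and 4: flag indicators the adversary can
# actually collect against. All indicators and capabilities are hypothetical.

indicators = {
    "unencrypted radio chatter":      {"reveals": "timing of operations",   "collected_by": "signals intercept"},
    "geotagged social media posts":   {"reveals": "location of personnel",  "collected_by": "open-source monitoring"},
    "badge photos in press releases": {"reveals": "facility access design", "collected_by": "open-source monitoring"},
}

adversary_capabilities = {"signals intercept", "open-source monitoring"}

for indicator, info in indicators.items():
    if info["collected_by"] in adversary_capabilities:
        print(f'RISK: "{indicator}" reveals {info["reveals"]} -> select an OPSEC measure')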
We should also recognize good guys aren't the only ones who practice this discipline. As a matter of fact, the bad guys do as well and many are quite good at it. The lessons we could learn from them, our fellow security professionals, and others are almost immeasurable.
  1. NEVER trust a big butt and a smile. Yup. I started off with that. Bear with me. Many intelligence agencies and law enforcement organizations use sex as a means to get close to a target or person of interest. Most bad guys realize this. However, many do not, to their own detriment. People involved with you in a relationship or sexual encounter get very close to you and your secrets. I liken these people to "trusted agents" whom you allow close enough that they can get more information than you're willing or able to share publicly. Poor OPSEC practitioners often forget this. Most of their security failures stem from this fatal flaw. I'm not saying to avoid relationships or eschew intimacy. If you're in a job that requires you to adhere to sound OPSEC principles, what I'm advising is that you exercise due diligence and conduct a risk analysis first. Think Marion Barry, Anthony Weiner, and Eliot Spitzer.
  2. Immortal words spoken during an EPIC fail.
  3. Always have a thoroughly vetted back-story for your cover. This is commonly referred to as "legend" in the intelligence community. This is an identity in line with your established, synthetic cover. For example, I previously mentioned the hacker known as the The Jester in a previous blog post. Depending on which side you're on, he's either a bad guy or a good guy. However, the lessons he teaches us about cover are insightful. Whenever someone "doxes" him, he has a prepared and detailed analysis as to how he created that cover identity. Many times he'll use a name that does exist with a person who either does not exist or who he has cleverly manufactured using a multitude of identity generators. He'll use disposable credit cards, email, LinkedIn profiles, VPNs which show logins from his cover location, etc. He even engages in cyber-deception with other actors to establish various cover stories for operations that require them. Whether you like him or not, he's certainly good at one thing we know for sure - cover discipline.
  4. NEVER trust anyone you just met. I see you laughing. Many people mistakenly believe they can and should trust everyone they meet. They will often claim they don't, but their behavior says otherwise. As Ronald Reagan is often quoted as saying, "In God we trust, all others we verify." I firmly believe this to be the most crucial aspect of operational security. Proper trust is needed in any environment for the mission to be accomplished. However, blind trust can and will kill any hopes of a successful mission. Whether you're checking identification at an entry control point or planning cybersecurity for an online bank, you should always treat every introduction you don't initiate as suspect. Then triage people and their level of access according to risk acceptance. This is a lesson we learned with Edward Snowden. He'd only been at Booz Allen Hamilton a few months before he began siphoning massive amounts of classified information to which he had no direct access or need-to-know. Another saying I'm fond of is "Keep your enemies close, but your friends closer." I'm not saying everyone you meet is going to steal from you or betray your trust. Like my momma always says, "Not everyone that smiles at you is your friend and not every frown comes from an enemy."
  5. Shut the hell up! No. Seriously. Shut up. If you hang around the special operations community, you'll hear the work they do described as that of "quiet professionals". Most successful bad guys realize the best way to ensure longevity is to shut the hell up. Bragging about an operation or giving "pre-game commentary" before it are guaranteed ways to get caught or killed. The truly dangerous people are the ones who never say a word and just do their work. Sometimes, lethality is best expressed with silence.



  6. Watch what you leak. While we can keep our mouths shut, it is more difficult in the information age to keep everything connected to us quiet. In order to properly protect ourselves, we have to begin this process by conducting proper risk analysis. Is what I'm doing right now giving away something I don't want the public to know? Is the device or medium I'm talking on able to give away information I'm not comfortable with sharing? Does my enemy have the ability to intercept or analyze what I'm doing in order to gain sensitive information? What "tells" am I projecting? These are a few of many questions you should be asking in order to ensure you're limiting "noise litter".

    In the information age, do I need to say more?
  7. If you're doing secret stuff, NEVER EVER EVER EVER EVER talk on the wire. Look at the Mafia as a perfect example of what not to do. As an OPSEC practitioner, you should never communicate on any medium that can give away your secrets or be intercepted. John Gotti got busted talking on the wire. A personal rule of thumb: if it can receive messages, it can transmit messages without you knowing. Treat every computer like an informant - feed it only what you're willing to share with your adversary.
  8. NEVER ever touch or be in the same place as the "product". For the uninitiated, that is one of the first rules of the dope game. Every successful, elusive drug dealer knows to keep away from the "product" (read: drugs). Whatever the "product" in your "game", ensure you put enough distance between you and it. If you have to be close to it, then have a good reason to be with it.
  9. Recognize "the lion in the tall grass". When practicing OPSEC, if there is one thing you should never forget, it is why you're doing it. The reason you're practicing it is simple - there are people out there who oppose you. Ignore them to your detriment.
  10. NEVER say something you can't back up or prove immediately. Nothing says you're a person who needs to be checked out more than saying things you can't back up or prove. People who are trying to vet you will require you to back up what you say, for a reason. Be ready for this. A great example is people who claim to be connected to someone of stature in order to gain access. They get found out because the target asks the person they claim to know, who cannot confirm the connection.
  11. Treat your real intentions and identity as that gold ring from Lord of the Rings. I'm not saying put your driver's license on a necklace so a troll who thinks it's his "precious" won't take it. First of all, that's too cool to happen in real life. Second, you'll look like an idiot. Finally, there are more practical ways of protecting your identity. For starters, never have anything that connects your identity to your operation. Next, if you have to use your real identity in connection with an operation, give yourself some ability to deny the connection. Lastly, NEVER trust your identity, intentions, or operations to anyone or anything other than yourself.
I've decided to include the more practical list from the "Notorious B.I.G." to drive home some of these principles:

TEN CRACK COMMANDMENTS
  1. Rule number uno, never let no one know
    How much, dough you hold, 'cause you know
    The cheddar breed jealousy 'specially
    If that man *** up, get your *** stuck up
  2. Number two, never let 'em know your next move
    Don't you know Bad Boys move in silence or violence
    Take it from your highness
    I done squeezed mad clips at these cats for they bricks and chips
  3. Number three, never trust nobody
    Your moms'll set that *** up, properly gassed up
    Hoodie to mask up, s***, for that fast buck
    She be layin' in the bushes to light that *** up
  4. Number four, know you heard this before
    Never get high on your own supply
  5. Number five, never sell no *** where you rest at
    I don't care if they want a ounce, tell 'em bounce
  6. Number six, that God*** credit, dig it
    You think a *** head payin' you back, *** forget it
  7. Seven, this rule is so underrated
    Keep your family and business completely separated
    Money and blood don't mix like two *** and no ***
    Find yourself in serious s***
  8. Number eight, never keep no weight on you
    Them cats that squeeze your *** can hold jobs too
  9. Number nine, shoulda been number one to me
    If you ain't gettin' bags stay the f*** from police
    If *** think you snitchin' ain't tryin' listen
    They be sittin' in your kitchen, waitin' to start hittin'
  10. Number ten, a strong word called consignment
    Strictly for live men, not for freshmen
    If you ain't got the clientele say hell no
    'Cause they gon' want they money rain, sleet, hail, snow
Don't forget the admonition Notorious B.I.G. gives, which should never be diminished:
Follow these rules, you'll have mad bread to break up
If not, twenty-four years, on the wake up
Slug hit your temple, watch your frame shake up
Caretaker did your makeup, when you pass

An information security professional known as "The Grugq" gave a very interesting talk on OPSEC that I think is worth a glance (try to contain all laughter and buffoonery at the preview image - we're running a family show here, folks):


Thursday, May 23, 2013

INFOGRAPHIC: The Cybercriminal Underground

TrendLabs, a leading information security firm, published this really awesome infographic about the cybercriminal underworld. It's certainly worth a look.


Monday, April 29, 2013

INFOGRAPHIC: Twacked! When good Twitter accounts go bad.

Given how much communicating we do via social media, on a variety of topics both personal and professional, and the permanence of the content we post, it should be no surprise that those social media accounts are being sought out more and more by nefarious parties. The question is: what are you doing to protect your account?


Saturday, March 16, 2013

VIDEO: Security Threats by the Numbers - Cisco 2013 Annual Security Report


The kind folks at Cisco published their Annual Security Report. What I like about what they did is that they chose to publish it in a video infographic format. As you can tell, I'm a HUGE fan of infographics. However, if you're a stickler for PDF reports, there's a link to the entire report below the video.

Some interesting facts:
  • Global cloud traffic will increase sixfold over the next five years, growing at a rate of 44 percent from 2011 to 2016.
  • Only one in five respondents say their employers do track their online activities on company-owned devices, while 46 percent say their employers do not track activity.
  • 90 percent of IT professionals surveyed say they do indeed have policies that prohibit company-issued devices being used for personal online activity—although 38 percent acknowledge that employees break policy and use devices for personal activities in addition to doing work.
  • Cisco’s research shows significant change in the global landscape for web malware encounters by country in 2012. China, which was second on the list in 2011 for web malware encounters, fell dramatically to sixth position in 2012. Denmark and Sweden now hold the third and fourth spots, respectively. The United States retains the top ranking in 2012, as it did in 2011, with 33 percent of all web malware encounters occurring via websites hosted in the United States.
To read more of the report, click here.

Saturday, February 2, 2013

INFOGRAPHIC: Everything You Ever Wanted To Know About Facebook Security

I found this infographic on Pinterest.com.  Some of this may be old news.  In light of what we know about Twitter's latest data breach, I wonder how Facebook has fared under similar attacks.  If you have any knowledge or even a broad understanding, we would welcome any commentary you might have.

Source: scribd.com via Return on Pinterest

HOW-TO: Make Your Own Faraday Cage


Unbeknownst to many outside the security arena, mobile devices are nothing more than really cool listening devices. In my first few blog posts, many moons ago, we covered how hackers could exploit vulnerabilities inherent in Bluetooth to take control of your phone's microphone. There is also speculation and evidence that it is now possible to turn on both the cameras (front and rear) and the microphone to get full video. With GPS, if a hacker gains electronic access to your phone, you have a device even the KGB would envy. As a security professional, there are times when you need to have conversations without having to worry about eavesdropping. Standard procedure in most high-security areas is to immediately surrender your phone in order to prevent electronic eavesdropping. Devices are then placed in a container to ensure no data is transmitted to or received from the device. This container is known as a Faraday cage. It blocks transmission and reception by the electronic devices inside it and acts as a shield against electromagnetic pulse attacks. There are several places online that sell Faraday bags. However, I found an article that walks you through constructing your own Faraday cage for about $15.
Here’s How to Build Your Own with About $15

Supplies

This is probably my most simple DIY project to date. All you need is an aluminum garbage can with a nice and snug lid along with a cardboard box.


Step One: Cut the Cardboard

From the bottom flaps to about the middle of the box you’re going to want to cut some slots about 8 inches wide. This just makes it so that the cardboard can conform easier to the shape of your can.


Step Two: Insulate Can with Box

You’re going to make a tube with your cardboard and slide it into the can. Go ahead and press against the edges of the can to make sure it’s right up against it. That way you have more room inside.



Step Three: Make & Place the Base Insulation

By tracing the bottom of the can on some extra cardboard, you’re going to cut out a circle that will fit in the bottom of your insulation. Then just push it down inside your can. You want this to be a tight fit.



Step Four: Tape the Insulation 

Tape in the creases where the base meets the sides of the insulation. Also tape along the cuts you made in the cardboard. Whatever you put inside of this cannot be touching the metal can – only the cardboard insulation. Taping these weak spots just ensures nothing gets past the cardboard to touch the metal.

Step Five: Trim the Excess

Just go around the edge of your can with a box cutter to cut off the excess cardboard insulation sticking out of the top.



Step Six: Put On Your Lid

Once you’ve put in all of your radios and other gadgets, you’ll just fit on your lid nice and tight.


There are many, many different designs and concepts for homemade Faraday cages. This is just one of them. If you happen to find a design that calls for wire mesh instead of solid metal, be sure to get mesh with the smallest holes you can find. Remember, you want the openings much smaller than the wavelength of the signals you're trying to block.
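As a rough back-of-the-envelope check, here's a short Python sketch that converts a few common signal frequencies into wavelengths (wavelength = speed of light / frequency). The "keep openings under a tenth of the wavelength" figure is a commonly cited rule of thumb, not a precise engineering spec.

# Rough guide for mesh openings: wavelength (m) = speed of light (m/s) / frequency (Hz).
# A common rule of thumb is to keep openings well under 1/10th of the shortest wavelength.

SPEED_OF_LIGHT = 299_792_458  # m/s

signals_hz = {
    "FM radio (100 MHz)": 100e6,
    "GPS L1 (1.575 GHz)": 1.575e9,
    "Wi-Fi (2.4 GHz)":    2.4e9,
    "Wi-Fi (5 GHz)":      5e9,
}

for name, freq in signals_hz.items():
    wavelength_cm = SPEED_OF_LIGHT / freq * 100
    print(f"{name}: wavelength ~ {wavelength_cm:.1f} cm; keep openings under ~{wavelength_cm / 10:.1f} cm")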


Read more: http://saltnprepper.com/faraday-cage/#ixzz2JitUSUi1

Monday, December 31, 2012

Hire Anonymous! - Cyber Threat Summit 2012 by paulcdwyer



Paul C. Dwyer, President of the ICTTF International Cyber Threat Task Force, discusses the concept of identifying talented individuals (hackers) before they are seduced into a world of cybercrime. He discusses traits and characteristics of such vulnerable minors, such as Asperger's Syndrome, and references the case of Gary McKinnon.

Friday, December 28, 2012

About Us