
WAP 2016 Published Application Not Working - HTTP Error 503


Imagine the situation.  You just finished deploying AD FS 2016 and Web Application Proxy (WAP) servers in a highly available environment, with the AD FS namespace load balanced internally and externally.  There are multiple AD FS servers and WAP servers.  This is an interesting deployment project and all is going well.  After verifying that core AD FS and WAP functionality works as expected, you then move on to using WAP to publish Exchange to the Internet using pass-through authentication.

Unfortunately no plan survives contact with the enemy....

Instead of seeing your lovely OWA splash screen when at Starbucks, you are greeted with the rather sad page below:

WAP 2016 Service Unavailable HTTP Error 503. The service is unavailable.

For make most glorious search engine benefit:

Service Unavailable HTTP Error 503. The service is unavailable.

Hmm.  Maybe OWA is not running on the published Exchange server – let's try ECP instead.

WAP 2016 Service Unavailable HTTP Error 503. The service is unavailable.

Nope, same issue.

Internally everything is just fine and all is working as expected.  From the WAP servers themselves, DNS resolves to the correct endpoints.  OWA and ECP can also be rendered as expected on the WAP server.

The issue is only with the external publishing.  Something is wrong with WAP.

Reviewing WAP Configuration

All of the required Exchange CAS namespaces were published using WAP.  Below is the Remote Access management console on server WAP-2016-1.  The OWA published application is highlighted, and then a zoomed view is shown for OWA.

WAP 2016 - List of Published Applications

WAP 2016 - Details of OWA Published Application

We can use the Remote Access Management console to open the properties of the published application, or use PowerShell.  The PowerShell method is shown below.

 

[code language="PowerShell" light="true"]Get-WebApplicationProxyApplication "mail.wingtiptoys.ca/owa" | Format-List[/code]

WAP 2016 Using PowerShell to Review Published Application

All of this looks OK.  The correct certificate is selected and the certificate is valid in all respects.

Since that all seems to be fine, let's review the WAP diagnostics to see what is happening.

WAP Troubleshooting

Upon initial inspection it would seem that all is well in the WAP world.  There are no errors logged in:

Applications and Services Logs\AD FS\Admin

All of these entries indicate nirvana, and they state:

"The federation server proxy successfully retrieved and updated its configuration from the Federation Service 'sts.wingtiptoys.ca'."

As noted earlier, the idpinitiatedSignon page was working as expected with no issues.  In this case the URL used was:

https://sts.wingtiptoys.ca/adfs/ls/idpinitiatedsignon.htm

idpinitiatedSignon Page Working As Expected

However, WAP logs to a different event log, which is:

Applications and Services Logs\Microsoft-Windows-Web Application Proxy/Admin

When this log is reviewed, note that there are errors.

WAP 2016 Application Event Log - Applications and Services Logs\Microsoft-Windows-Web Application Proxy/Admin

Specifically, we can see EventID 12019, which shows an error creating the WAP listener.

WAP EventID 12019

The details of the error are:

Web Application Proxy could not create a listener for the following URL: https://mail.wingtiptoys.ca/owa/.
Cause: The filename, directory name, or volume label syntax is incorrect.
(0x8007007b).

 

Well that would be a problem, no?

 

Addressing WAP 2016 Application Publishing Error

The name that the error is referring to as invalid is highlighted below.

WAP 2016 Publishing Settings

It is quite common to copy the published URL and then paste it into all of the relevant fields.  This is efficient and also prevents making typos.

However, if the URL is pasted into the Name field as shown above you will find yourself in a pickle and probably reading this post.....

 

The issue is due to the invalid "/" character in the Name field.  Simply remove the offending special character to address the issue.  To do this, we can right-click the WAP published application and choose Edit.

Note that in the below example, the Name field was edited and now contains "OWA".

Editing WAP 2016 Published Application To Correct name Field

Complete the wizard to save changes.  Allow for WAP to save and update its configuration.

This could also be done in PowerShell using the Set-WebApplicationProxyApplication cmdlet.  As an example:

[code language="PowerShell" light="true"]Set-WebApplicationProxyApplication -BackendServerUrl "https://mail.wingtiptoys.ca/owa/" -ExternalCertificateThumbprint 'BD4074969105149328DBA6BC8F7F0FFC9509C74F' -ExternalUrl "https://mail.wingtiptoys.ca/owa/" -Name 'OWA' -ID '8D8344E0-52A9-ED1D-692C-81BF039813B5'[/code]

 

This was repeated for all published applications.  Note the highlighted Name column - all of the published applications now have simplified names.
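If many applications were published, a quick way to spot any that still contain the problematic character is to filter on the Name property. This is only a sketch; Name and ExternalUrl are standard properties in the Get-WebApplicationProxyApplication output.

[code language="PowerShell" light="true"]Get-WebApplicationProxyApplication | Where-Object { $_.Name -match '[/\\]' } | Select-Object Name, ExternalUrl[/code]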

Updated WAP 2016 Published Applications

 

Now it is time to test, and you should be back in business!

OWA Successfully Published Using WAP 2016

 

Note that this issue seems to be specific to WAP 2016, and was not present on WAP 2012 R2.
Cheers,

Rhoderick


Windows 10, 1607, VDI Performance Tuning Guide available


Hello IT Community.  Robert M. Smith, Sr. PFE with Microsoft Premier Services here.  I would like to call out a guide I created and helped get published on Windows 10 Virtual Desktop Infrastructure (VDI).  This guide is for Windows 10, build 1607.  It goes through built-in apps, scheduled tasks, settings, and other optimizations available for VDI environments.  You can find this guide here:

https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/rds-vdi-recommendations

The settings in this guide are called out, but not necessarily "recommended" by Microsoft one way or the other.  Rather, these settings are called out for customers to evaluate for their environments, to see whether they may provide performance improvements.  The core idea behind this guide is that the effect of most of the settings by themselves is minuscule.  However, multiply that effect across a hundred settings, and then across the number of VDI desktops, and there is a potential to reduce compute resource demands on the VDI infrastructure, and hopefully provide better performance to each VDI desktop.

And finally, the recommendations can be applied to regular Windows 10 in most cases.  I hope you find this guide useful.

Sincerely,

Robert M. Smith, Sr. PFE
Microsoft Premier Services

An easy way to measure computer reboot and logon times


Hello IT Professionals.  Robert M. Smith here with some information on measuring reboot and logon times using the Sysinternals tool Process Monitor.  Process Monitor has evolved over the years from several different tools into a single, "must have" tool for a wide variety of troubleshooting scenarios.  One of the great capabilities of Process Monitor is its ability to capture a trace from very early in the computer startup process.  The information collected from this startup trace can be used to determine how long the computer takes to get to various states, such as:

  •  Logon screen
  • Credentials entered by user
  • Start Explorer.exe (shell)
  • Desktop "ready to use"

I know of at least one other toolkit that performs this: the Windows Performance Toolkit (WPT).  WPT is also a "must have" set of tools for tracing a reboot scenario and many other facets of Windows performance.  The consideration that led me to use Process Monitor instead is its powerful filtering capability, which allows a reboot cycle trace to be as small as 8192 bytes (8 KB).  For pure time measurements of reboot and logon timing, the only events that need to be collected are "process start" events.

Process start events can be used as markers to indicate key events in the Windows startup process.

  • Logonui.exe (logon dialog is presented to the user)
  • Userinit.exe (started after a user has entered credentials, is successfully authenticated, and subsequent processes are started for that user to set up their desktop)
  • Explorer.exe (start shell)
  • Process Monitor (the user runs Process Monitor to stop and save the trace, which also indicates the desktop is at or near a "ready to use" state)

Regarding the phrase "desktop ready to use", this is an arbitrary time in the logon process, where Windows has started the majority of services and processes, and has achieved a state where the user can begin opening and interacting with applications.  At "desktop ready to use", the user should be able to open their e-mail client or their business tools and be able to use them without unnecessary pauses or busy cursors.  This is again an arbitrary state, to be determined by each customer in their environment.  You might have, for example, a tool that starts up to assist with setting up a default printer.  This could be a marker that the desktop is at or near "ready to use" for the IT consumer.  In this case the user logon time is approximately the time difference between when the 'Userinit.exe' process starts and when your printer assistance tool starts.

There is some consideration that might be given to the fact that a full reboot of the computer can be far more resource intensive than a simple logoff and logon.  In many environments, a full computer restart may trigger security software to perform a new scan of files, memory, and processes, or Windows features such as AppLocker might perform additional scanning on a full computer restart that they may not perform on a simple logoff and log back on.  A full computer restart can therefore skew a logon time.  To help offset that to some degree, the test may add a condition where the user does not log on when the logon dialog appears, but instead waits a minute or more to log on.  The same markers listed above can be used to measure user logon time.  By waiting a minute or two, Windows will have completed most, if not all, of the starting of services and other automatic system processes that occur on a full computer restart.

Anyway, on to the procedures to capture startup and logon traces using Process Monitor.

Windows Reboot and Logon Analysis Tool

Sysinternals’ Process Monitor

Process Monitor is an advanced monitoring tool for Windows that displays, and optionally records, real-time file system, registry, and process/thread activity. Process Monitor can run in real-time mode or can be configured to record a boot logging trace.

Process Monitor can be downloaded and copied to a computer being analyzed. There is no formal installation process for Sysinternals tools. If you run the Process Monitor tool interactively, the first time you will be presented with the end-user license agreement (EULA). If you accept it, you will not be prompted again to accept the EULA for that specific user profile. If your profile is deleted or reset, you may have to accept the EULA the next time you run Process Monitor.

If you pin the Process Monitor executable to your taskbar, then during trace capture you will not have to start Explorer or a command prompt to start Process Monitor, you can just click Process Monitor directly from your taskbar.

You can find the Sysinternals Process Monitor tool by searching for "Process Monitor" with your favorite Internet search engine.

Recording a “Reboot Cycle” trace, using the Sysinternals’ Process Monitor

  1. Start Process Monitor (this may require elevation)
  2. The ‘Process Monitor Filter’ interface will be displayed. Click the ‘Reset’ button to reset filters to default values, and then click the ‘Ok’ button.
  3. Click the ‘Capture’ button to stop the current real-time trace, as depicted below:

  4. Before the boot logging is started, some filtering can be enabled that will reduce the captured trace to a very small size, and filter out events that are not needed to analyze reboot and logon.
    1. On the ProcMon icon bar, click to de-select the following categories of events:
      1. Network (name is “Show Network Events”)
      2. Disk and File (name is “Show File System Activity”)
      3. Registry (Show Registry Activity)
    2. Now, click ‘Filter’ from the ProcMon menu, and then click the menu item named ‘Drop Filtered Events’.
  5. In the Process Monitor menu, click ‘Options’, and then click ‘Enable Boot Logging’, which enables the Process Monitor boot logging until the tracing is stopped.
  6. An option dialog will appear titled ‘Enable Boot Logging’, which offers an option to “generate thread profiling events”. Don’t click any option, just click the ‘Ok’ button to enable boot logging on the subsequent reboot.
  7. After clicking the "Ok" button, close Process Monitor and restart the computer.

NOTE: It is important to log on after the reboot, start Process Monitor, and then stop and save the trace, so that it does not use an excessive amount of disk space.

 

Analyzing a “Reboot Cycle” trace, using the Sysinternals’ Process Monitor

  1. After rebooting the device, log on to the device, and then start Sysinternals’ Process Monitor.
  2. You may be prompted to reset the filter, which you can do, and then click OK.
  3. You will then be prompted to save the current trace. You can click ‘Ok’ and then choose the folder to save the trace to.
  4. Once the trace is saved, the boot trace will be available and displayed in Process Monitor.
  5. Click ‘Tools’ and then click ‘Process Tree’.
  6. Click the very first item in the far-left column, which should be the pseudo-process ‘Idle’, and note the clock time.
  7. Next, in the ‘Process Tree’ display, scroll down while looking in the far-left column for the item ‘Logonui.exe’. When found, click ‘Logonui.exe’, and again note the clock time.

NOTE: the difference between ‘Idle’ clock time and ‘Logonui.exe’ is the time interval from computer startup to the logon credentials dialog box. In this case the difference is 11 seconds.

  • At this point in computer startup, the user entered credentials and pressed ENTER. The next marker in the trace to look for is the process ‘Userinit.exe’. ‘Userinit.exe’ is the process that is launched if the user’s credentials are verified, and it initiates the subsequent chain of events leading to the user’s shell starting, the desktop starting, and the important marker “desktop ready to use”.

NOTE: Desktop ready to use is going to be determined by each individual configuration. You are looking for a process starting, at approximately the same time that the computer is responsive enough to allow some other process to start and be used without excessive delay. For example, desktop ready to use could mean that the user is able to start Microsoft Office Word and begin working in Word without delays or excessive sluggishness from the operating system.  The marker you are looking for may be the process name of your anti-virus software being started, or some shell item being started. A Windows 10 inbox example might be ‘Windows Defender tray item’. The point being that the time that ‘MSASCuiL.exe’ (Windows Defender tray item) starts, may be approximately the same time that services and other items have started, and that the OS is responsive enough to user input to be declared “ready to use”. Therefore, this arbitrary marker is termed by me, “desktop ready to use”.

  • The ‘Userinit.exe’ process should be relatively close to, but below, the previously noted process ‘Logonui.exe’. Note the clock time for the start of the ‘Userinit.exe’ process.
  • A good indicator of “desktop ready to use”, in this case, is the starting of the process ‘Procmon.exe’. Recall that after logging on, we started ‘Procmon.exe’ in order to stop and save the boot trace. Note the clock time of starting ‘Procmon.exe’.
  • The difference in clock time between the start of ‘Userinit.exe’ and the start of ‘Procmon.exe’ is roughly that particular user's overall logon time (a small calculation sketch follows this list).
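As a sketch of that arithmetic (the two timestamps below are placeholders, not values from the trace shown in this post), the difference can be computed in PowerShell once both clock times have been noted:

[code language="PowerShell" light="true"]$userinitStart = Get-Date '2017-07-20 08:15:42'   # clock time noted for Userinit.exe
$procmonStart  = Get-Date '2017-07-20 08:16:05'   # clock time noted for Procmon.exe
'Approximate user logon time: {0:N0} seconds' -f (New-TimeSpan -Start $userinitStart -End $procmonStart).TotalSeconds[/code]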

NOTE: A reboot/logon trace with default ProcMon filtering may be 3 GB or more. A reboot/logon trace recorded with the filtering options documented in this guide may be as small as 8192 bytes. An additional effect of this filtering is that much less memory and processor resources are needed, and thus the effect of trace capture has far less impact on the overall reboot/logon trace statistics, including clock time.

I hope this information is helpful.

Sincerely,

Robert M. Smith, Sr. PFE
Microsoft Premier Services

Inspire 2017 Microsoft 365 Session Recordings


Following up from the last post on Inspire's EMS session recordings, this post includes the links to the recorded sessions for Microsoft 365 Business and Microsoft 365 Enterprise (formerly known as Secure Productive Enterprise). Note that the sessions are more heavily weighted towards the Business offering as that was unveiled last week.

WIN01 Grow your business with Modern IT

As businesses seek to transform their products, tools, and operations, they need a world-class platform built for the digital economy. Windows 10, Office 365, and Microsoft Enterprise Mobility + Security enable IT to deliver cloud-powered modern IT, advanced security, and more productive experiences.

Watch Video

WIN05p New, integrated Office 365 and Windows solution for small and midsize businesses delivers more value, and streamlines CSP managed service offerings

Microsoft innovation delivers new value for your small and midsize business customers and fresh opportunities to expand your CSP practice. Learn about the new, comprehensive offering that enables you to help organizations be more productive and less vulnerable to security threats.

Watch Video

WIN16 New, integrated Office 365 and Windows solution for small and midsize businesses delivers more value, and streamlines CSP managed service offerings

Microsoft innovation delivers new value for your small and midsize business customers and fresh opportunities to expand your CSP practice. Learn about the new, comprehensive offering that enables you to help organizations be more productive and less vulnerable to security threats.

Watch Video

OFC01 Extend your portfolio and profit potential with Microsoft 365 Business: a new, integrated solution for small and midsize businesses

Get ready to deliver Microsoft 365 Business for your small and midsize business customers. This offering harnesses the leading capabilities of flagship products in a single solution that enables customers to be more productive while protecting their data on virtually any device.

Watch Video

OFC02 Microsoft 365 Enterprise: a single, trusted solution to grow your managed services practice

With Microsoft 365 Enterprise, which brings together Office 365, Windows 10, and Enterprise Mobility + Security, you can deliver one solution that empowers staff productivity while enabling organizations to meet security and compliance mandates. Understand the value and opportunities for your value-added services.

Watch Video

OFC03 Microsoft 365 Business for small and midsize businesses delivers more value, streamlines CSP managed service offerings

Microsoft innovation delivers new value for your small and midsize business customers and fresh opportunities to expand your CSP practice. Learn about the new, comprehensive offering that enables you to help organizations be more productive and less vulnerable to security threats.

Watch Video

OFC06 Microsoft Workplace Analytics: Deepen engagement, improve productivity, win deals

Office 365 Workplace Analytics transforms digital exhaust into actionable insights that enable managers to maximize their organizations’ time and resources. Discover how Workplace Analytics enhances businesses and helps partners win deals by enhancing their existing solution sets.

Watch Video

PROJECT SERVER 2013 & PROJECT 2013. JULY 2017 PUBLIC UPDATE.


Hello,

The July 2017 Public Update for Project 2013 and Project Server 2013 has been available for download for a few weeks now. Keep in mind that SP1 must already be installed before this update can be applied. Please also remember the update release criteria: non-security updates for Office family products are released on the first Tuesday of each month, while security updates for Office family products are released on the second Tuesday of each month:

Project Server 2013 “rollup” (full) package:

https://support.microsoft.com/en-us/help/3213566/july-11-2017-cumulative-update-for-project-server-2013-kb3213566

Project Server 2013 individual package:

https://support.microsoft.com/en-us/help/3213577/july-11-2017-update-for-project-server-2013-kb3213577

Project 2013 client package:

https://support.microsoft.com/en-us/help/3213538/july-11-2017-update-for-project-2013-kb3213538

We consulted the following posts and article before writing this one:

https://blogs.technet.microsoft.com/projectsupport/2017/07/12/project-and-project-server-july-2017-updates-released/

https://blogs.technet.microsoft.com/stefan_gossner/2017/07/12/july-2017-cu-for-sharepoint-2013-product-family-is-available-for-download/

https://blogs.technet.microsoft.com/office_sustained_engineering/2017/07/05/july-2017-non-security-office-update-release/

https://blogs.technet.microsoft.com/office_sustained_engineering/2017/07/11/july-2017-office-update-release/

https://support.microsoft.com/en-us/help/4033107/july-11-2017-update-for-microsoft-office

The database version becomes 15.0.4945.1000, and the client version is 15.0.4945.1000.

Please test it in a test environment before deploying to Production.

We hope you find this of interest. Regards,

Jorge Puig

 

6 technology trends shaping digital transformation


Digital transformation, according to Constellation Research's definition, is "the methodology by which organizations transform and create new business models and culture with digital technologies."

And these new business models have begun to create a winner-take-all market, according to Ray Wang, Constellation's founder and principal analyst.

In an on-demand webinar, Wang notes that more than half of the Fortune 500 companies have merged, been acquired, gone bankrupt, or fallen off the list since 2000. And many of these changes can be attributed to the creation of new digital business models.

“What we see in every industry,” says Wang, “is that the digital leaders have captured 40 to 70 percent of global market share and 23 to 57 percent of revenue. In some places, if there are only one or two big competitors, they have started to take 77 percent of the revenue.”

“This is a massive shift for the market.” And to survive and thrive, Wang says, organizations must transform their business models to recognize and focus on the post-sale, on-demand economy: where what happens after the sale is as important as making the sale; where subscription models are the new norm; where saving people time and capturing their attention creates new leaders.

This constant attention to convenience and engagement applies not only to customers and employees, but also to partners and suppliers. “When you bring these things together,” says Wang, “that is when you have digital transformation. It is much more than a project. It is much more than a program. It is something that happens in a constant state.”

The role of technology in digital transformation

If we look back at the definition of digital transformation (the methodology by which organizations transform and create new business models and culture with technology), we will see that there are five digital technologies that have been the foundation of digital transformation since the nineties, Wang notes. These are:

  • Mobile
  • Social
  • Cloud
  • Big data
  • And unified communications

“To date, the trend that has had the greatest impact on digital transformation has been the move to the cloud, because it represents democratized access to computing power,” says Wang. “Anyone can have access not only to information, storage, or processing capabilities; we now have access to all the technology that surrounds us. When you have democratized access, there are many more opportunities to create new business models.”

Looking ahead, however, the following six technology trends will shape the next shift in how we improve customer experiences, improve what we do in the workplace, transform products, and leverage machines at scale to augment humanity:

  • IoT
  • 3D printing
  • Augmented/virtual reality
  • Robotics
  • Blockchain technology
  • Artificial/cognitive intelligence

Which of these will have the greatest impact on the largest number of businesses? Wang believes it will be artificial and cognitive intelligence. “If you talk to Cortana and use conversations as a service, you will see recommendations emerge. People talk to their phones today. People talk to the devices in their homes.

“That piece is learning from you, and all the parts of AI's self-learning and machine learning will have an immense amount of business impact inside every organization.”

Which comes first: the business model or the technology?

“When we look at digital transformation, people tend to think about the technology,” says Wang, “but it all comes down to how you change your business models and how you change the way you engage with your stakeholders.

“Once you have the right model, you can figure out which technologies you need to support it. And once you have brought those two pieces together, that is when you achieve digital transformation.”

(RDS) Tip of the Day: Which graphics virtualization technology is right for you?


Today's tip...

You have a range of options when it comes to enabling graphics rendering in Remote Desktop Services.

In Windows Server 2016, you have two graphics virtualization technologies available with Hyper-V that let you leverage the GPU hardware:

  • Discrete Device Assignment (DDA) - For the highest performance, using one or more GPUs dedicated to a VM with native GPU driver support inside the VM. Density is low because it is limited by the number of physical GPUs available in the server.
  • RemoteFX vGPU - For knowledge worker and high-burst GPU scenarios where multiple VMs leverage one or more GPUs through para-virtualization. This solution provides higher user density per server (see the PowerShell sketch after this list).
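As a rough PowerShell sketch of the two options (not an official walkthrough; $vmName and $locationPath are placeholders for your VM name and the PCIe location path of the GPU you want to dedicate):

[code language="PowerShell" light="true"]$vmName = 'RDSH-VM01'                              # placeholder VM name
$locationPath = 'PCIROOT(0)#PCI(0300)#PCI(0000)'   # placeholder GPU location path

# RemoteFX vGPU: add a para-virtualized GPU adapter to the VM (higher density)
Add-VMRemoteFx3dVideoAdapter -VMName $vmName

# Discrete Device Assignment (DDA): dedicate the physical GPU to the VM (highest performance)
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName[/code]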

The following illustration shows the graphics virtualization options in Windows Server 2016.

Which should you use?


Stupid Little Problem with SNMP Version Tags


I don’t normally get to work with SNMP, but I’ve been at a customer this week where we needed to configure SCOM to do SNMP monitoring.  This is fairly straightforward, and Kevin has written a nice article on how to do that here.  I ran into an issue, though, with regards to removing the version tags.

First, let’s start with a recap of the problem. If a network object is discovered with one version of SNMP, traps generated with a different version of SNMP by the same object will not generate alerts. This is a known issue.  I’m not sure of the logic behind it, but whatever.  The solution is documented: all we need to do is remove the version tag or simply clear the contents of said version tag.  Again, this is straightforward and documented.

My discovery, though, is that if one chooses to edit the SNMP monitor or rule, SCOM is so kind as to add the contents of the version string back into the unsealed MP. It does this whether you remove the tag or simply clear its contents.  What this means is that if you ever need to go back and edit your custom SNMP monitor or rule, you need to do this task again.


eDiscovery and Content Search in Microsoft Teams


Introduction:

When a new team in Microsoft Teams is created, it automatically creates an Office 365 Group, and because Office 365 Groups exist within Office 365, they can be subject to Security and Compliance policies in Office 365. In addition, content posted in Microsoft Teams can also be subject to these policies, which enables organizations to perform Content Search and eDiscovery on this stored content. This article will walk an administrator through how to perform this task.

Disclaimer: This article is neither an extensive nor exhaustive "how to" for eDiscovery in Office 365. The purpose of this article is to demonstrate the simplicity of performing eDiscovery on Microsoft Teams content, but we will not go in depth into the process.

Environment Setup:

Within Microsoft Teams, I have created some content in the conversation of a team called Finance Auditors Team. This content pertains to a confidential Contoso project that we will refer to as "Project Lunch". In addition, two files have been created under the Files tab of the team: "Project Status Report" and "Project Plan".


Step 1: Create a new eDiscovery Case in the Office 365 Security & Compliance Center

Browse to the Office 365 Security & Compliance Center at www.protection.office.com. On the left pane, expand Search & Investigation and click eDiscovery.

Click the Create a case button. In the flyout on the right side, give the case a name and a brief description, then click Save.

Step 2: Configure & Run the eDiscovery Case

On the eDiscovery screen, click Open next to the case you just created.

On the new window that opens for the case details, click the Search tab.

Click the + (plus) sign to launch a new window to configure the keyword search. In the details, give the search a name and configure the search locations. For my example, I will select Search Everywhere and then click Next.

In the What do you want us to look for step, enter a keyword. For my example I will enter Project Lunch and then click Search.

Note The dialog box will close and the search will immediately start to execute. This process may take a few moments to run.
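The same search can also be created and started from Security & Compliance Center PowerShell. This is only a rough sketch, not part of the walkthrough above; it assumes you already have a remote PowerShell session connected to the Security & Compliance Center, and the search name is arbitrary.

[code language="PowerShell" light="true"]# Create a search across all Exchange and SharePoint locations (Teams conversations and files live there)
New-ComplianceSearch -Name "Project Lunch Search" -ExchangeLocation All -SharePointLocation All -ContentMatchQuery '"Project Lunch"'
Start-ComplianceSearch -Identity "Project Lunch Search"
# Check progress and the number of items found
Get-ComplianceSearch -Identity "Project Lunch Search" | Format-List Name, Status, Items[/code]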

 

Step 3: Review the results

Once the search is finished running, click the hyperlink Preview Search Results (Note: A new window called "Preview Search Results" will launch, and you may be prompted to authenticate).

Within the Preview Search Results window, you will notice on the left pane the search results where the keyword "Project Lunch" appears. In this example, Project Lunch was returned in a PowerPoint presentation, a Word document, and two IM conversations (Microsoft Teams).

Important: All the items in the search results were in the Finance Auditors Team within Microsoft Teams.

Clicking on an item in the left pane will display the detailed results on the right pane. Notice you can click Download Original Item, which allows you to download the original document where the keyword was discovered; in this example, a Word document (.docx).

I'm going to click on the IM item titled Finance Auditors Team/1500489998445. This will display the message on the right pane and enable you to also download the original conversation. Note that Microsoft Teams conversations appear as the IM type when doing the content search.

Conclusion:

At this point, further actions can be taken to export the content, or Advanced eDiscovery can be used to prepare a more detailed search. Stay tuned as I will continue to write future articles on additional Security & Compliance topics for Microsoft Teams!

--Matt Soseman


 

Good news everyone! We are under brute force attack!


The title is a tribute to Professor Farnsworth... I mention it because my jokes usually never land... And just to make it clear, this post is not guidance on what to do in case of brute force attacks (bummer... eh?); it is just a testimony of my recent experience with the topic and how I leveraged Windows built-in features, free Microsoft products, and Azure services to sort it out.

So, it is a quiet Friday afternoon and I feel like testing the free subscription of Operations Management Suite. I was curious to see how much data the security logs of my 5 machines would take in one week. So I deployed the agents and... Saved by the weekend bell, I kinda forgot about it.

The week after I connected to the portal, and I saw something like this:

At first I was like: 20K successful authentications? Hmm... OK, I have only 10 users in the lab but I am running a s###load of scripts. So why not. But then: 23K authentication failures! And that I couldn't explain. So I looked at the detailed reports that the Audit and Security OMS package provides by default:

It is clear that it is not me. And I was even wondering if one of my accounts might be compromised by now. But not really... Brute force attacks against my passwords are very unlikely to succeed. So what are those? Looking at the details, they all hit the same machine...

This brings me back to a presentation I had delivered a few weeks before. I needed to connect to my lab using RDP but the only port allowed on the premises was TCP 443. So I changed the random RDP port I was using to 443. And voila! I did my demo and everybody was happy. By doing this I actually exposed my lab to an army of RDP brute force zombies. I looked at my machine and indeed saw the thousands of random attempts, all generating 4625 events in my security logs:

The problem with a 4625 event coming from a failed authentication on an RDP connection is that you don't have the source IP address:

So I enabled the firewall logs on my machine like this:
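The equivalent setting can also be applied with PowerShell. This is only a sketch; the log path is the Windows default and matches the one parsed by the script later in this post, and the 16 MB log size is an arbitrary choice.

[code language="PowerShell" light="true"]Set-NetFirewallProfile -All -LogAllowed True -LogBlocked True -LogFileName '%windir%\system32\LogFiles\Firewall\pfirewall.log' -LogMaxSizeKilobytes 16384[/code]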

And I was indeed seeing an IP going crazy in the logs (the log shows port TCP 3389; my port 443 publication is in fact done at the perimeter by a NAT/PAT device):

2017-07-09 12:45:29 ALLOW TCP 89.163.148.77 10.10.0.6 22556 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:31 ALLOW TCP 89.163.148.77 10.10.0.6 42610 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:33 ALLOW TCP 89.163.148.77 10.10.0.6 26244 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:37 ALLOW TCP 89.163.148.77 10.10.0.6 29731 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:39 ALLOW TCP 89.163.148.77 10.10.0.6 46383 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:41 ALLOW TCP 89.163.148.77 10.10.0.6 33357 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:45 ALLOW TCP 89.163.148.77 10.10.0.6 37050 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:46 ALLOW TCP 89.163.148.77 10.10.0.6 50106 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:45:49 ALLOW TCP 89.163.148.77 10.10.0.6 40662 3389 0 - 0 0 0 - - - RECEIVE

Even a quick search on my favorite search engine revealed very quickly that this IP was already well known for RDP brute force attacks. So I created a rule to block it on my host firewall:
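An equivalent rule could also be created with PowerShell; as a sketch, the display name matches the "Block Brute Force" rule the script below expects, and the IP address is the offender from the log excerpt above:

[code language="PowerShell" light="true"]New-NetFirewallRule -DisplayName "Block Brute Force" -Direction Inbound -Action Block -RemoteAddress 89.163.148.77[/code]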

Quick look at the firewall logs:

2017-07-09 12:45:49 ALLOW TCP 89.163.148.77 10.10.0.6 40662 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:48:51 DROP TCP 89.163.148.77 10.10.0.6 40662 3389 0 - 0 0 0 - - - RECEIVE
2017-07-09 12:48:55 DROP TCP 89.163.148.77 10.10.0.6 40662 3389 0 - 0 0 0 - - - RECEIVE

Cool. It's getting dropped. That was fun, but still, next time I want to be aware of this way quicker than that! And not figure it out just by checking the OMS portal because I was bored. So let's create an alert in OMS, and next time I will know right away! Very simple logic: every 5 minutes I check the number of 4625 events for the last 5 minutes, and if it is more than 10, I send an email:

The story could end here but later in the week... This happened:

LOL - Well I kinda asked for it. So being notified is one thing... But I want it to stop too! So let's take it up a notch. Let's have the following:

  1. Create a scheduled task which runs every 5 minutes on the domain controllers and checks how many 4625 events I have on the server for the last 5 minutes.
  2. If it is more than 10, then look at the firewall log to see if it is the same IPs all the time.
  3. If it is the same IP, then add it automatically to the firewall block list.

So the scheduled task is no challenge.  I'll spare you the details on it (unless you want me to elaborate).
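For completeness, here is a minimal sketch of how such a task could be registered; the script path and task name are placeholders, so adjust them to your environment:

[code language="PowerShell" light="true"]$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Block-BruteForce.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration (New-TimeSpan -Days 365)
Register-ScheduledTask -TaskName 'Detect Brute Force' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest[/code]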

The script is very basic; again, this is not for you to copy/paste and implement in your environment, it is just me sharing my rough ideas:

[code language="PowerShell"]
#Create some sort of log
$_brute_log = "C:\BruteLog.txt"
Function WriteLog($__msg) {
    $__ts = (Get-Date).ToString("yyyy/MM/dd HH:mm:ss")
    "$($__ts) - $($__msg)" | Out-File $_brute_log -Append
}
WriteLog "NEW ENTRY _______________"
#Filter to look for all the 4625 events for the last 5 minutes
$filter = @"
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[System[Provider[@Name='Microsoft-Windows-Security-Auditing'] and Task = 12544 and (EventID=4625) and TimeCreated[timediff(@SystemTime) &lt;= 300000]]]</Select>
  </Query>
</QueryList>
"@
#Measure the time the query takes... Just for the sake of it; it is fine as long as it takes less than 5 minutes.
$query_start = Get-Date
$querylogs = Get-WinEvent -ComputerName localhost -FilterXml $filter -ErrorAction SilentlyContinue
$query_stop = Get-Date
#Spit out some logs
WriteLog "Execution time: $($query_stop - $query_start)"
WriteLog "Result Count:  $($querylogs.Count)"
#If it is higher than 15, something fishy is happening
If ( $querylogs.Count -ge 15 ) {
    #We parse the firewall logs (only the last 5000 lines)
    Get-Content C:\Windows\system32\LogFiles\Firewall\pfirewall.log -Tail 5000 | `
        Where-Object { $_ -like "*ALLOW*3389*" } | `
        ForEach-Object { $_.Split(" ")[4] } | `
        Group-Object | `
        Sort-Object Count -Descending | `
        ForEach-Object {
            $IP = $_.Name
            $count = $_.Count
            #We don't care about local IPs
            If ( $IP -like "10.*" ) {
                WriteLog "Local IPs are skipped"
            } Else {
                #It's not local, let's see how many attempts there are; if it is more than 10 I don't like it
                If ( $count -gt 10 ) {
                    WriteLog "$IP doesn't look good with $count hits"
                    $blockrule = Get-NetFirewallRule -DisplayName "Block Brute Force"
                    [array] $currentlist = (Get-NetFirewallAddressFilter -AssociatedNetFirewallRule $blockrule).RemoteAddress
                    #Check if the IP is already on the block list
                    If ( $currentlist -contains $IP ) {
                        WriteLog "$IP is already blocked"
                    } Else {
                        #If not, we add it to the list
                        WriteLog "$IP will be added to the filter"
                        $currentlist += $IP
                        Get-NetFirewallAddressFilter -AssociatedNetFirewallRule $blockrule | Set-NetFirewallAddressFilter -RemoteAddress $currentlist
                        WriteLog "$IP added to the list"
                        #Let's leave a message for OMS to pick up
                        EVENTCREATE /L APPLICATION /T ERROR /ID 666 /D "$IP had $count hits, it has been added to the Block Brute Force firewall rule"
                    }
                } Else {
                    WriteLog "$IP has only $count hits"
                }
            }
        }
} Else {
    WriteLog "Not enough 4625, looking good here"
}
[/code]

Notice the EVENTCREATE at the end? (Well, I know I could have done it in PowerShell...) I plant an event that I will collect with OMS and alert on:

And this is the result in my mailbox:

And the current state of my firewall rule scope:

Anyhow, that was fun 🤓 Oh wait, what about looking at where the different connection attempts are coming from, since I only block the most stupid attacks? Let's do some Power BI to parse the firewall logs:

Now that is cool 🙂

Istio CI/CD pipeline for VSTS


I wrote sample code for Istio and configured a CI/CD pipeline in VSTS that enables blue-green deployments and canary releases for Kubernetes.
For installing Istio on Azure, you can refer to this post; it worked fine for me. You can then set up Istio on top of your Kubernetes cluster.

This is the pipeline that enables CI/CD on Istio. You can get the sample code on GitHub; however, I'll change some of it to enable automation.

Build Pipeline

Build an image

You build the docker image with this configuration. I'm using Azure Container Registry. Using VSTS, you can just select your subscription and your Azure Container Registry; nothing too difficult in this part. One thing you need to take care of is that you need to use the Hosted Linux agent. You can add the "latest" tag option to make it the latest image.

With this configuration, your docker image name will be kube166.azurecr.io/webservice:$(BUILD_BUILDID).

$(BUILD_BUILDID) will be replaced by VSTS with a number like 196. It will be the version number of the docker image.

Push the image

After you create the image, you need to push it to Azure Container Registry. This is not so different from the build step.

Push the Artifact

Then push the artifact.  In the path to publish column, you can specify any file; the file itself means nothing. However, if you use this task, you can link the CI pipeline to the release pipeline.

You can add testing/coverage features to this pipeline. For example, here is another pipeline that executes tests. If you run Docker, I recommend executing the tests inside the container, because then we use exactly the same binary for testing as for the production deployment.

 

Release pipeline

The point is: let's use the Hosted Linux agent. It makes it easy to build and deliver to a Kubernetes/Istio environment. Also, this pipeline has two artifacts. The first one is the CI pipeline, which we already configured. The second is the Git or GitHub repository which has the Kubernetes/Istio yaml files.

The Git/GitHub repository has the webservice.yaml file shown below. The point is, you can see the #{LAST_BUILD_BUILDID}# and #{BUILD_BUILDID}# tokens. These strings are replaced by the replace tokens task. LAST_BUILD_BUILDID is the previously deployed docker image version, and BUILD_BUILDID is the current docker image version, which you can get from the CI pipeline if you link the artifact.

Please compare the file with the original version.

I share some code for configuring Istio and the services. You need to create the secret on your Kubernetes cluster before creating this pipeline.

webservice.yaml

apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-service
spec:
  selector:
    app: web-service
  ports:
  - port: 80
    name: http
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-deployment-#{LAST_BUILD_BUILDID}#
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-service
        version: #{LAST_BUILD_BUILDID}#
    spec:
      containers:
      - name: web-service
        image: kube166.azurecr.io/webservice:#{LAST_BUILD_BUILDID}#
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: kb166acrsecret
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-deployment-#{BUILD_BUILDID}#
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-service
        version: #{BUILD_BUILDID}#
    spec:
      containers:
      - name: web-service
        image: kube166.azurecr.io/webservice:#{BUILD_BUILDID}#
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: kb166acrsecret
---
apiVersion: v1
kind: Service
metadata:
  name: web-front
  labels:
    app: web-front
spec:
  selector:
    app: web-front
  ports:
  - port: 80
    name: http
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-front
        version: 1.0.0
    spec:
      containers:
      - name: web-front
        image: kube16.azurecr.io/webfront:1.0.3
        env:
        - name: SERVICE_URL
          value: "http://web-service"
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: kb16acrsecret
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webservice-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: web-front
          servicePort: 80

 

Also, you need to change the other yaml files for the Istio routing.  Please compare them with the original ones.

all-v1.yaml

type: route-rule
name: web-service-default
namespace: default
spec:
  destination: web-service.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: "#{LAST_BUILD_BUILDID}#"

 

all-v2.yaml

type: route-rule
name: web-service-default
namespace: default
spec:
  destination: web-service.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: "#{BUILD_BUILDID}#"
    weight: 100

 

canaly.yaml

 

type: route-rule
name: web-service-default
namespace: default
spec:
  destination: web-service.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: "#{LAST_BUILD_BUILDID}#"
    weight: 70
  - tags:
      version: "#{BUILD_BUILDID}#"
    weight: 30

 

test-v2.yaml

 

type: route-rule
name: web-service-test-v2
spec:
  destination: web-service.default.svc.cluster.local
  precedence: 2
  match:
    httpHeaders:
      cookie:
        regex: "^(.*?;)?(NAME=v2tester)(;.*)?$"
  route:
  - tags:
      version: "#{BUILD_BUILDID}#"

Now you are ready to configure the Istio pipeline.

Blue production pipeline

Don't forget to use the Hosted Linux Preview agent. 🙂

This is a brand-new unofficial Kubernetes task. You can find it on the Marketplace; the GitHub repo is here. You can download and configure kubectl, istioctl, and helm. NOTE: please specify the Kubernetes download version with a "v" in front of the version number (sorry for the confusion); you don't need the "v" for Istio and helm. This task downloads the binaries and sets the PATH and KUBECONFIG environment variables. You can add your Kubernetes config file via a k8s endpoint; you can refer to the documentation.

This is another brand-new task: Last Version. It enables us to manage the previous version number. BUILD_BUILDID will be your current version number; however, if you want to configure blue-green deployment, you also need the previous version number. This task is very simple and is only a first version, but it can help you manage the previous version.

If you use this task, you can do two things. One is getting the previous version from a storage account and setting that version number in the environment variable which you specify in 2. The other is recording the current version as the latest in the storage account. You need to create a storage account in advance and get its connection string, which you can paste in 1. You also need to specify the environment variables which you want to use for the version number of the Docker image; this time it is 2, BUILD_BUILDID.

I haven't uploaded it to the Marketplace yet. However, you can get it from my GitHub, build it, and upload it to your account with the tfx client.

For the details of tfx-cli, please refer to this page.

tfx build tasks upload --task-path ./lastversion

 

Then you need to replace the variables which you defined in the yaml files. This task enables you to replace #{ENV}# with the value.

Using the shell exec task, you can execute any shell command. In this case, I execute the istioctl kube-inject command. Generally speaking, VSTS tasks can't handle pipes, so I store the result in a file.
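In other words, the shell exec step runs something along these lines (the file names here are assumptions for this example, and the second command is roughly what the deploy step described in the next paragraph does):

istioctl kube-inject -f webservice.yaml > webservice-injected.yaml
kubectl apply -f webservice-injected.yaml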

Then you can deploy to your k8s cluster. This task is a kubectl exec command; it is included in my unofficial Kubernetes task.

The Kubernetes tasks include an Istio task. You can configure the routing like this.

Delete the route

Then create a new routing rule. This routes everyone to the previous version.

Add the routing for testers. The test-v2.yaml file routes to the new version only testers who have a cookie of NAME=v2tester. Please refer to the test-v2.yaml file which I included in the previous section. Now only the testers can access the new version.

Canary Production

This is the canary release. The downloader, Last Version, and replace tasks are the same as before. The only difference is the route rule for canary testing. This rule routes some people to the previous version and some people to the new version.

Green production

Finally, you delete the testers' route and then route all users to the new version.

 

Finally, I store the current version as the latest version to the storage account.

 

That's it. I hope you enjoy Istio CI/CD with VSTS.

 

 

Revolutionize your inventory management and improve your customer service with TraknProtect


TraknProtect is a Chicago-area startup developing software solutions that aim to revolutionize the way hotels track and manage their inventory. TraknProtect’s flagship application is a real-time inventory tracking and analytics platform for hotels and resorts that instantly transforms rudimentary inventory filing systems into real-time, automated tracking systems. With TraknProtect, your staff can now locate any piece of inventory - cribs, rollaway beds, tables, projectors, etc. - instantaneously, allowing them to respond to and fulfill any guest request for additional inventory in a fraction of the time. TraknProtect uses Bluetooth low energy (BLE) trackers and an easy-to-use proprietary app to track equipment so that your staff can instantly locate any piece of equipment, without undergoing any additional training. Plus, the app provides you with an analytical evaluation of your hotel’s inventory usage so that you can forecast future inventory needs - saving you valuable resources that are often wasted on purchasing unneeded inventory.

According to Founder and CEO Parminder Batra, TraknProtect functions to make any piece of hotel equipment “smart”, by providing hotel employees with its location in real-time. "Traditionally, when a guest requests an additional item like a crib, a member of the housekeeping staff receives the request, and has to comb through every possible storage area to find the item. This process takes time and can lead to dissatisfied guests and loss of future business,” Batra says. “By expanding TraknProtect’s ability to integrate with leading hospitality service solutions, we can help any hotel become more organized and efficient, while also improving their customer service.”

Although TraknProtect was initially conceived as a side project of Batra’s, the company has been an instant hit throughout the hospitality industry, securing contracts with many of the most prestigious hotels in the United States including: Hyatt McCormick Place, Hyatt Regency Chicago, the Clarion Inn, and Grand Hyatt New York. TraknProtect has also been accepted into the Travel Startups Incubator, which will help the company expand their client network and secure additional funding. Batra and her team are very pleased by the rapid success of TraknProtect and are looking forward to expanding into new markets, and new industries, in the coming months. The company is currently eyeing entry into the air travel industry and has engaged in talks with several large airline providers.

Boasting a highly-experienced developer team, TraknProtect decided to build their solution using the Azure stack because of its speed, reliability, and most importantly, scalability. In addition to basic hosting, the app utilizes a variety of Azure services including: Visual Studio, Web Apps, Virtual Machines, and Power BI. TraknProtect uses a client and admin portal running on an Azure Web Application and all data collected by IoT hubs are sent directly to an Azure VM Server. The solution utilizes several Virtual Machines – running in both availability sets and standalone, to consume and store data. The app also leverages Visual Studio to quickly develop and edit code, and is currently working to incorporate Power BI to improve the solution’s reporting platform.

“Using Azure has enabled us to rapidly scale as we grow our client base, while maintaining high availability and reliability,” Batra says. “Plus, templates to create Virtual Machines, availability sets, and clusters have been very handy in allowing us to quickly get our infrastructure up and running. Azure definitely decreased our to-market time and allows us to efficiently improve our solution to meet any need, no matter how unexpected.”

Batra also states that Azure’s ability to seamlessly incorporate an extensive library of open source material was instrumental in her company’s decision to use Microsoft technologies. TraknProtect’s backend solution leverages MySQL using a Master/Slave VM, Redis Cache, and source code that was developed using PHP. “Our solution was built using a lot of open source, and we knew that it was essential to choose a technology stack that could fully integrate anything we threw at it. Azure has allowed us to successfully build our ideal app at low cost. We are very happy that we decided to leverage Azure and would recommend it to anyone.”

+++++++++++++++++++++++++

Microsoft is helping these startups succeed through its BizSpark program. To join or see other startup stories, visit us at our website here. To listen to our startups, check out these podcasts on DevRadio here.

About BizSpark:  Microsoft BizSpark is a global program that helps startups succeed by giving free access to Microsoft Azure cloud services, software and support. BizSpark members receive up to $750 per month of free Microsoft Azure cloud services for 3 years: that’s $150 per month each for up to 5 developers. Azure works with Linux and open-source technologies such as Ruby, Python, Java and PHP. BizSpark is available to startups that are privately held, less than 5-years-old and earn less than $1M in annual revenue.

Get an Early Look at the Latest Surface! Surface Readiness Roadshow [Updated 7/20]


 

We will be holding partner training sessions for the new Surface Pro and Surface Studio.

Demo units will be available, so come and experience the new Surface ahead of everyone else.

 

■ Agenda ■

This training session is scheduled to run for 90 minutes.

  1. Introduction to the new Surface products
  2. Surface deployment track record, proposal scenarios, and enabling workstyle reform
  3. Partner programs and campaigns
  4. Q&A and hands-on time with the devices

 

■ Venues and Dates ■

Please apply via the "Register" links.

Tuesday, July 25, 15:30 - 17:00 (reception opens 15:00) [Fukuoka venue] Register
Thursday, July 27, 15:30 - 17:00 (reception opens 15:00) [Osaka venue] Register
Monday, July 31, 15:30 - 17:00 (reception opens 15:00) [Nagoya venue] Register
Wednesday, August 2, 15:30 - 17:00 (reception opens 15:00) [Tokyo venue] Register

 

▼ Download the materials here

 

Issue where, after receiving a message requesting a read receipt in Outlook 2016, the read receipt returned to the sender does not reach the sender


This is the Microsoft Japan Outlook support team.

In this blog post, we explain the behavior where, after receiving a message requesting a read receipt in Outlook 2016, the read receipt returned to the sender does not reach the sender.

 

About the feature

So that the sender of a message can confirm whether the recipient has opened it, Outlook supports the following features when sending and receiving messages.

 

  • When sending a message: a feature to request a read receipt message confirming that the recipient has read the message.
    You can send the mail with the [Request a Read Receipt] check box selected on the [Options] tab of the message window.

 

  • When receiving a message: a feature to send a read receipt response when you open a message that requests one (you can also configure it not to respond).
    You can change the default behavior by clicking [File tab]-[Options]-[Mail] in the Outlook window and using the [Tracking] options on the right side of the screen.
    The default setting is [Ask each time whether to send a read receipt], so when you open a message with a read receipt request, a prompt is displayed asking whether or not to reply.

 

This feature is called read receipts.

 

Symptom

When you receive a message requesting a read receipt in Outlook 2016 and return the read receipt to the sender of the original message, the read receipt may not reach the sender.

 

Conditions for the issue

This occurs when Outlook 2016 sends back a read receipt and there is a server on the delivery path that discards messages whose Envelope-From is NULL.

 

Explanation of the behavior

Outlook 2016's read receipt implementation conforms to RFC 3798, which defines the specification for read receipts (message disposition notifications) on the Internet.

RFC 3798 defines that the Envelope-From (the MAIL FROM of the SMTP session) of a read receipt must be NULL.

Based on this definition, Outlook 2016 sends read receipt messages with a NULL MAIL FROM; this behavior is by design in Outlook 2016.

Because of this behavior, if, for example, an SMTP server on the delivery path does not allow sending or receiving mail with a NULL Envelope-From, the read receipt will not reach the original sender even though it was sent.
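To illustrate, the SMTP envelope of a read receipt generated according to RFC 3798 begins with an empty reverse-path, which is the part that such servers reject (the recipient address below is an example only):

MAIL FROM:<>
RCPT TO:<original-sender@example.com>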

 

Workaround

Outlook 2016 does not implement an option to specify an address in MAIL FROM when sending a read receipt, nor a feature to additionally specify an Envelope-From in the mail headers when sending.

Therefore, this needs to be handled on the mail server side.

 

The contents of this information (including attachments, linked content, and so on) are current as of the date of writing and are subject to change without notice.

Forza Garage Opens: 167 of the Cars Featured in Forza Motorsport 7 Revealed


Of the more than 700 cars featured in Forza Motorsport 7, 167 have been revealed on ForzaMotorsport.net, including the 2014 Porsche 918 Spyder, the 2011 Lamborghini Sesto Elemento, and the 1967 Ferrari #23 Ferrari Spa 330 P4. The car list will be updated weekly in the lead-up to the October 3 release.

forza motorsport 7

Forza Motorsport 7 car reveal page – Week 1 (ForzaMotorsport.net / English only)

Forza Motorsport 7 product page

Related information:


Microsoft Premier Workshop: DevOps People and Process


Description
DevOps is a cultural mindset that values continuous improvement with a focus on processes and practices, and it enables cross-functional teams to deliver quality software with efficiency and speed. This workshop introduces DevOps to every part of the organization and enables a correct adoption of DevOps. Participants learn the full stack of DevOps and how they can transform their organizational culture and delivery process to adopt DevOps principles and practices. This workshop does not focus on a particular software delivery tool; instead, it illustrates the concepts with a mix of slides, stories, and agile games.

Hierzu möchten wir sie herzlich einladen.

Agenda
Module 1: Understanding DevOps
Introduces DevOps and its origins. Relates DevOps to Lean, Agile, Toyota Production System (TPS) processes and talks about key concepts.
Module 2: Project vs. Product
Compares product development to projects and illustrates how traditional projects prevent the full adoption of DevOps.
Module 3: Value Streams
Introduces value streams and all the activities required to transform a customer request into a good or a service. Value streams are part of every Lean transformation and a proven approach to visualizing and resolving disconnects, redundancies, and gaps in a value delivery system. This chapter focuses on enhancing the full organization and introduces quality metrics.
Module 4: 3M Model
Covers all aspects of waste. One of the key concepts in Lean and TPS is the identification of steps that add value and steps that don’t add value. By sorting all process activities into these two categories, it is possible to start actions for improving the delivery process.
Module 5: Team
There's no such thing as a "DevOps Team". DevOps proposes strategies to create better collaboration between functional silos, or to create cross-functional teams instead of component teams.
Module 6: Requirements Management
Understanding the core concepts and benefits of iterative requirements management is vital to the successful implementation of a fast, reliable software delivery process.
Module 7: Development Practices
This module demonstrates commonly used practices like branching strategies and advanced version control topics such as feature flags and branch by abstraction. Furthermore, practices like pair programming, unit testing, Test-Driven Development and others are introduced.
Module 8: Continuous Delivery
Participants learn how Continuous Delivery reduces time-to-market for software products and makes releases painless, low-risk events. See how to reduce the risk of deployments through effective release management combined with release engineering techniques such as canary releases, dark launching, blue-green deployments, and the expand/contract pattern.
Module 9: Metrics
Shows the value of common agile KPIs such as mean time to detect (MTTD), mean time to resolution (MTTR), deployment frequency, deployment success rates, and others as ways to learn about an organization’s maturity level (see the short sketch after this agenda).
Module 10: DevOps vs. ITIL
Despite the common misconception that ITIL and DevOps contradict each other, this module demonstrates how ITIL principles and practices can be implemented in a lightweight way to achieve the goals of effective service management and enable rapid, reliable delivery.
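
To make Module 9 slightly more concrete, here is a minimal PowerShell sketch that computes a mean time to resolution (MTTR) from a hypothetical incident log; the incident data is invented for the example.

[code language="PowerShell"]
# Hypothetical incident log: when each incident was detected and when it was resolved.
$incidents = @(
    [pscustomobject]@{ Detected = Get-Date "2017-07-01 08:00"; Resolved = Get-Date "2017-07-01 09:30" }
    [pscustomobject]@{ Detected = Get-Date "2017-07-03 14:00"; Resolved = Get-Date "2017-07-03 14:45" }
    [pscustomobject]@{ Detected = Get-Date "2017-07-05 22:10"; Resolved = Get-Date "2017-07-06 01:10" }
)

# MTTR = average time from detection to resolution.
$mttr = ($incidents | ForEach-Object { ($_.Resolved - $_.Detected).TotalMinutes } |
         Measure-Object -Average).Average

"MTTR: $([math]::Round($mttr, 1)) minutes"
[/code]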

Target audience
The workshop is aimed at anyone who wants to improve the software delivery process, in particular software engineers, programmers, testers, QA, system administrators, and database administrators.

Level 200
(Level scale: 100 = strategic / 200 = technical overview / 300 = deep subject knowledge / 400 = expert technical knowledge)

Language
This workshop is held in German. The course materials are mostly in English.

Registration
To register, please contact your Microsoft Technical Account Manager directly or visit us on the web at Microsoft Premier Education. There you will find a complete overview of all open workshops, and you can register for them directly there.

Schichtwechsel stories: Rouven Kasten - from electrician to digital communications officer


At the beginning of May, republica, Europe's largest digital conference, took place in Berlin again, and under the hashtag #Schichtwechsel we were able to show how digital technologies are changing our lives. But there are also #Schichtwechsel ("shift changes") on a personal level; in this blog series we want to portray some inspiring examples.

What is your personal #Schichtwechsel story?

In the beginning I was pretty lazy and some of my grades were bad. I never got my Abitur either; my primary school teachers gave me at best an "okay" for comprehensive school. I just muddled through. Homework and studying were never my thing, and the school band did its part as well, keeping me away from everything except playing the guitar.

 

Photo: Tilman Schenk

After repeating the 10th grade in the hope of maybe doing my Abitur after all, I had to leave school. So it was time to apply for a traditional apprenticeship. Since I had already been interested in electronics in my youth, that was the direction I took. I trained as a "power systems electrician, specializing in plant engineering" at what was then one of the largest steelworks in Duisburg. Crammed into a training workshop with almost 500 other apprentices, it was clear to me one week after the probation period that I never wanted to do this for the rest of my life.

Nevertheless, I completed the three-and-a-half-year apprenticeship and then did my civilian service at the Duisburg hospitals. I also dropped the idea of taking up a profession there after the odd trip to the pathology department. That job would certainly have broken me; I have enormous respect for nursing staff.

Although I got an Atari 130 XE for Christmas in 1987, later replaced it with a 1040 ST and then traded that with an uncle for a Fender Stratocaster in 1989, after my civilian service I was as far removed from the PC as America is from the Caucasus. It was my civilian service colleague Sven, who felt like the first person in Duisburg with a modem, who then showed me this "Internet". It captivated me immediately. Ever since, I have lived and worked with the Internet and its boundless possibilities for communication.

I learned HTML and was something of a pioneer, getting started with various internships and later as a permanent web designer and programmer at several agencies in Düsseldorf. When the first big web bubble burst in 2003, it also hit the agency I was working at, and I immediately went freelance. After almost eight years of self-employment I switched back to permanent employment after all and now work primarily in the social web.

 

Photo: Dan Taylor

In 2015 I joined GLS Bank and am now working on the company side for the first time. My job today is to look after all digital channels such as the website, blog, social media, and newsletter. That was certainly my biggest personal #Schichtwechsel of recent years.

From your perspective, what was the most significant #Schichtwechsel of the last 10 years?

Personally, as described above, the move to GLS Bank; technically, I would point to broadband roll-out on the one hand and the smartphone on the other. With the smartphone, Apple in 2007 paved a completely new way for everyone to communicate with each other far more easily.

Which future #Schichtwechsel are you particularly excited about?

Because of the change in my own way of living, I believe and hope that technology will help us shape a future that leaves a decent planet for our children and their children. All technical gimmicks aside, industry is also called upon to push such things much harder.

What tip would you give someone who has a #Schichtwechsel ahead of them?

Do it, just do it, because you are the only one who holds your life and your path in your own hands. It's not worth looking at what others are doing or trying to match them. You also have to learn to get back up when you fall flat on your face. That has happened to me often enough, but then it's simply: "Get up and keep going!" You can always choose your own KPI for success; the main thing is that you are happy with what you do. I am. 😉

What is your personal Love Out Loud message?

The Internet is still a good and loud place if we make it one. Don't leave it to the assholes.

More Schichtwechsel stories:

Magdalena Rogl - from childcare worker to Head of Digital Channels

Sascha Pallenberg - from tech blogger to Head of Digital Content

Franziska Ferber - from management consultant to fertility coach

Torsten Schiefen - from security guard to communications manager

Tanja Cappell - from product manager to lettering influencer

Christiane Germann - from civil servant to social media expert

Rob Vegas - from political scientist to social media uncle


A post by Magdalena Rogl
Head of Digital Channels, Microsoft Germany

magdalena_rogl

For @lenarogl, influencer relations is not a buzzword but a passion.
Connecting the right people and making new contacts is something she considers a great enrichment, both privately and professionally, above all because there are always fascinating stories to discover along the way.

In Vallensbæk, students are improving the indoor climate to become Denmark's best


Vallensbæk Municipality has launched a three-year project based on sensor data collection for the municipality's 7th-grade classes. It enables students and teachers to work actively on improving the indoor climate. The goal is to create an optimal indoor climate so that learning improves and the municipality's students achieve a grade average that places them in the top 25 nationwide.

Vallensbæk Municipality has developed a visionary digital strategy that covers several aspects and works to improve the urban environment in the municipality with the help of digital solutions.

Schools are a key area for the municipality. The project has an ambitious goal of analyzing and adjusting the indoor climate and improving learning with the help of small high-tech sensors in the classrooms. Studies show that in 60% of Danish primary schools the indoor climate is so poor that students have difficulty absorbing knowledge and perform worse in national tests. When the students leave primary school in three years, the work on the classroom indoor climate should ideally be reflected in their grades, so that the municipality ranks among Denmark's 25 best schools measured by grade average.

Sensors have been installed in eight classrooms, and they make it possible to follow the development of the classroom indoor climate over a longer period and to have the data presented visually, for example as graphs. The data is made available in a cloud-based Microsoft solution so that students, teachers, and parents can follow the development of the indoor climate in real time. Read more about Microsoft Azure here.

"The ambition is clear: our students should be in the top 25 nationwide measured by grades. We hope that the work on the indoor climate will have a positive impact on learning and help create optimal conditions for the students to develop academically, so that they achieve the best possible grades. At the same time, we also want to offer optimal working conditions for our employees," says Eskil Frøding, Digital Learning Consultant at Vallensbæk Municipality.

At the start of the school year in August, sensors were installed in the 7th-grade classrooms. Up until the autumn break they collected data without students or teachers acting on it; this data serves as a baseline for comparison. The digital solution underlying the collection is based on the latest technology.

"We have created a digital solution based on the latest Internet of Things technology, where we pull data from small sensors, store it in the cloud, and analyze it using Machine Learning and Analytics. It may sound complicated, but the solution is so intuitive that students and teachers can work with the data and create a better indoor climate," says Jon Bille, director of Bluefragments, a certified Microsoft Partner.

The new digital solutions create the framework for more creative and engaging teaching. New teaching material has been developed for the entire year group, so that students and teachers put indoor climate, CO2, and the significance of limit values on the timetable in their science subjects.

"We want to involve the students actively in the work of creating a good indoor climate and make them aware of the significance of CO2 and oxygen saturation in the air in a natural way. When we can collect and analyze data about the indoor climate and use those figures in our teaching, digitalization becomes a cornerstone of the teaching," says Charlotte Kornmod, principal at Egholmskolen.

Vallensbæk Municipality and Microsoft Denmark have just signed a Memorandum of Understanding. This means that Microsoft and Microsoft's IT partners will help the municipality realize its digital vision and make it a leading SmartCity municipality, where innovation and technology open up entirely new possibilities and perspectives: a SmartCity that uses technology and data to create a better framework for a safe life.

Facts

Two sensor boxes are mounted in each classroom. One measures temperature and humidity; the other measures the CO2 content of the air. This technology is called the Internet of Things (IoT). IoT makes everyday electronic devices intelligent so that they can collect data; it refers to devices equipped with sensors that gather information.

The information is turned into knowledge with the help of analysis tools. The data is presented through a user interface, typically a website or an app, where different notification or alarm levels can be configured for the individual devices.
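
As a rough illustration of that data flow, the sketch below shapes a single classroom reading and posts it to a cloud ingestion endpoint. The classroom name, measured values, and URL are placeholders and are not part of the actual Vallensbæk solution.

[code language="PowerShell"]
# Placeholder reading - classroom, values and endpoint are invented for the example.
$reading = [pscustomobject]@{
    Classroom    = "7A"
    TimestampUtc = (Get-Date).ToUniversalTime().ToString("o")
    TemperatureC = 22.4
    HumidityPct  = 41
    Co2Ppm       = 950
}

# Post the reading as JSON to a (hypothetical) ingestion endpoint, from where it can be
# stored, analyzed, and shown to students, teachers, and parents.
Invoke-RestMethod -Method Post `
    -Uri "https://example-ingest.azurewebsites.net/api/readings" `
    -ContentType "application/json" `
    -Body ($reading | ConvertTo-Json)
[/code]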

Measurements

The students measure temperature, humidity, and CO2 and discuss limit values based on the recommended targets.

Remote Desktop Services (RDS) architecture and its main roles



Hello again, everyone! This time I'm going to tell you a bit about the RDS architecture, its components, and the different functions of each role.

In Windows Server 2008 R2 (WS2008R2), Terminal Services (TS) was further developed and renamed Remote Desktop Services (RDS). RDS is the backbone of Microsoft's Virtual Desktop Infrastructure (VDI) solutions. Likewise, in Windows Server 2012, RDS has been developed even further with scenario-based configuration that can be set up through configuration wizards. Even so, the concepts and architecture of RDS have remained practically identical since WS2008R2. The new and improved architecture takes advantage of virtualization and makes remote access a more flexible solution through the implementation of new scenarios. To get a sense of what RDS can do, it is essential to understand the functions of the main components of the architecture and how they interact with each other to process an RDS request. There is also new terminology and a set of acronyms to become familiar with in the context of RDS. For the purposes of this post, it is worth mentioning that RDS refers to the Windows Server 2008 R2 platform and later, while TS (Terminal Services) refers only to Windows Server 2008.

There are five primary roles within the RDS architecture, as shown in the image below, and all of them require an RDS licensing server. Each component includes a set of features designed to provide specific functions within the RDS architecture. Together, the five roles make up a structure for access to Terminal Services applications, remote desktops, and virtual desktops. In essence, WS2008R2 and later offer a set of features with specific functions for designing a company's remote access infrastructure.

 

 

To begin, an end user accesses the RDS URL, which is the URL containing the published resources (applications). The interface, provided by Remote Desktop Web Access (RDWA) and configured through Internet Information Services (IIS) with SSL, is the web access point for RemoteApp and VDI. The access URL stays the same regardless of how the resources are organized, composed, and published from the multiple RDS session hosts behind it. By default, RDS publishes resources with a URL of the form https://FQDN-of-the-RDWA-WebAccess-server/rdweb, and this URL is the only information the administrator needs to provide to end users so that they can access the resources authorized through RDS. An end user must authenticate with their AD credentials when accessing the RDWA URL, and the published applications and resources are then presented to that user based on the permissions granted in the access control list. In other words, the end user can only see and access the resources for which their AD account holds the required permission.

 

 

 

Remote Desktop Gateway (RDG) is optional and works the same way as it does in Terminal Services. An RDG sits at the edge of the corporate network to filter external RDS requests according to the access criteria defined on a server called the Network Policy Server (NPS). Based on certificates, RDG provides secure remote access to the RDS infrastructure. For the system administrator, the RDG is the boundary of an RDS network. There are two main access policies defined on an NPS server in relation to an RDG:

  • One is the connection authorization policy, or CAP. It is essentially an authorization list of end users that defines who may connect to the RDG.
  • The other is the resource authorization policy, or RAP. In essence, this is a resource access list that specifies which devices an end user covered by a CAP may connect to through the associated RDG.

 

 

In RDS, applications are installed and published on a Remote Desktop Session Host (RDSH), which is similar to a TS Session Host, or simply a Terminal Server in a TS scenario. An RDSH loads the applications, runs them, and displays the results. Digital sign-in can easily be enabled on an RDSH with a certificate. Multiple RDSHs can be configured with load-balancing technology. This requires every RDSH in a load-balanced farm to be configured identically, with exactly the same applications.

An important improvement in RDSH compared with the TS Session Host is the ability to show published applications to end users based on the application's access control list (ACL). An authorized end user can access only those published applications for which they have been authorized in the ACL. By default, the Everyone group is authorized, so any connected user can access the application.

 

 

Remote Desktop Virtualization Host (RDVH) is a new feature that serves requests for virtual desktops running in virtual machines, or the assignment of the virtual machines themselves. An RDVH server is based on Hyper-V, that is, a Windows server with the Hyper-V role enabled. When serving a user request that requires a VM to be assigned, an RDVH server automatically starts the VM if it is not already running. Next, the end user is asked for their credentials when signing in to the virtual desktop. However, an RDVH does not accept connection requests directly; instead, it uses an RDSH as a "redirector" to serve VM-based requests. The pairing of an RDVH with its redirector is defined in the Remote Desktop Connection Broker (RDCB) when an RDVH-based resource is added.

 

 

Remote Desktop Connection Broker (RDCB), an expansion of what Terminal Services Session Broker is in TS, provides a unified experience for configuring user access to traditional TS applications and to virtual desktops based on virtual machines (VMs). Here, a virtual desktop can be running either on a designated VM or on a VM assigned dynamically, based on load balancing, from a defined pool of VMs. A system administrator uses the RDCB console, called Remote Desktop Connection Manager, to add RDSHs, TS servers, and RDVHs, as well as the applications published by the RDSHs and TS servers. Likewise, the VMs running on the RDVHs can then be published through the RDWA URL. Once end users have authenticated at that RDWA URL, they can access the authorized applications (RemoteApp) and virtual desktops.

 

 

A Remote Desktop (RD) client obtains the connection information from the RDWA server in an RDS deployment. If the RD client is outside the corporate network, it connects through the RDG. If the RD client is inside the corporate network, it can connect directly to either an RDSH or an RDVH once the RDCB provides the connection information. In both cases, the RDCB plays a central role in making sure the RD client is given access to the appropriate resource. Using certificates, the network administrator can configure single sign-on (SSO) across the various RDS components to give end users a comfortable and secure experience.

The image below shows graphically the function performed by each of the roles described:

 

 

Conceptually, the RDCB is the "chief of operations" within an RDS architecture; it knows where each resource is, whom to contact, and what to do with each RDS request. Before a logical connection can be established between a client and a target RDSH or RDVH, the RDCB acts as a liaison, passing the relevant information back and forth between the different components while an RDS request is being serviced.

 

IN SUMMARY

 

From a more general viewpoint, the RD client uses RDWA/RDG to gain access to an RDSH or RDVH, while the RDCB connects the RD client to a session on the target RDSH, or to a VM configured on a target RDVH.
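
For readers who want to see the roles side by side in practice, the sketch below stands up a basic session-based deployment with the RemoteDesktop PowerShell module available in Windows Server 2012 and later. The server names and collection are placeholders, and a real deployment would also add the RD Gateway, RD Licensing, and the certificate configuration discussed above.

[code language="PowerShell"]
# Placeholder server names - adjust to your environment.
$broker      = "rdcb.contoso.com"    # Remote Desktop Connection Broker (RDCB)
$webAccess   = "rdwa.contoso.com"    # Remote Desktop Web Access (RDWA)
$sessionHost = "rdsh1.contoso.com"   # Remote Desktop Session Host (RDSH)

# Create a session-based deployment with the three core roles.
New-RDSessionDeployment -ConnectionBroker $broker -WebAccessServer $webAccess -SessionHost $sessionHost

# Create a collection on the session host and publish a RemoteApp in it.
New-RDSessionCollection -CollectionName "Finance Apps" -SessionHost $sessionHost -ConnectionBroker $broker
New-RDRemoteApp -CollectionName "Finance Apps" -DisplayName "Calculator" `
    -FilePath "C:\Windows\System32\calc.exe" -ConnectionBroker $broker
[/code]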

 

I hope that after reading this post, the function of each role within an RDS architecture is clearer, and that you find it useful.

Until the next post!

 

 

 

How Parquet.Net from Elastacloud Will Empower your Big Data Applications

$
0
0

By Andy Cross, COO of Elastacloud

You may have heard of Apache Parquet, the columnar file format massively popular with at scale data applications, but as a Windows based .NET developer you've almost certainly never worked with it. That's because until now, the tooling on Windows has been a difficult proposition.

Parquet tooling is mostly available for Java, C++ and Python, which somewhat limits the .NET/C# platform in big data applications. While C# is a great language, we developers lagged behind our peers in other technology spheres in this area.

According to http://parquet.apache.org:

Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.

Before now, "regardless of the […] programming language" had a subtext of "provided it's not .NET".

What is Parquet.Net?

Changing all of this is the new Elastacloud-sponsored project, Parquet.Net. Fully open, licensed under MIT and managed on GitHub, Parquet.Net is a library for modern .NET that enables the reading and writing of Parquet files inside the .NET Framework. Distributed on NuGet, Parquet.Net is easy to get started with, and is ready to empower your Big Data applications from your enterprise .NET platform.

What Scenarios would I use Parquet.Net?

E-commerce company BuyFashion.com, an up and coming online fashion retailer, have experienced enormous sustained growth over the past five years. They have recently decided to build a team of Data Scientists in order to take advantage of the analytics edge that they have long been following in the tech media.

Their existing platform is a Microsoft .NET Framework ecommerce system and the majority of their existing team are skilled in C# and familiar with building large scale applications in this stack. The company’s new Data Science team have chosen to use Apache Spark on top of Azure HDInsight and are skilled in Scala and related JVM languages.

The Data Science team need to consume feeds that come from the core engineering team so that the system interoperates in a seamless manner. The initial implementation interoperates based on a JSON message format. Whilst this is simple from the point of view of the core engineering team, it places a significant burden on the Data Science team, as they have to wrangle the data.

Thankfully, the core engineering team discovered Parquet.Net, which means they can eliminate the wrangling the Data Science team would otherwise need to do by writing the interop feed natively in Parquet directly from .NET. As shown above in blue, this new piece of the tooling puzzle removes significant complexity from the Data Science process.

How do I use Parquet.Net?

Reading files

In order to read a Parquet file, you'll need to open a stream first. Because Parquet makes extensive use of file seeking, the input stream must be readable and seekable. This somewhat limits how much streaming you can do; for instance, you can't read a Parquet file from a network stream because the reader needs to jump around in it, so you have to download the file to local disk first and then open it.

For instance, to read a file c:\test.parquet you would normally write the following code:

[code language="csharp"]
using System.IO;
using Parquet;
using Parquet.Data;

// Open the file as a readable, seekable stream
using(Stream fs = File.OpenRead(@"c:\test.parquet"))
{
    using(var reader = new ParquetReader(fs))
    {
        // Read the whole file into memory as a DataSet of rows
        DataSet ds = reader.Read();
    }
}
[/code]

This reads the entire file into memory as a set of rows inside the DataSet class.

Writing files

Parquet.Net operates on streams, so you need to create one first. The following example shows how to create a file on disk with two columns: id and city.

[code language="csharp"]
using System.IO;
using Parquet;
using Parquet.Data;

// Define the schema (an int "id" column and a string "city" column) and add two rows
var ds = new DataSet(new SchemaElement<int>("id"), new SchemaElement<string>("city"));
ds.Add(1, "London");
ds.Add(2, "Derby");

// Write the DataSet out to a Parquet file on disk
using(Stream fileStream = File.OpenWrite(@"c:\test.parquet"))
{
    using(var writer = new ParquetWriter(fileStream))
    {
        writer.Write(ds);
    }
}
[/code]

What about quick access to data?

Parq is a tool for Windows that allows the inspection of Parquet files. There are precious few tools in this category, so when we were investing in Parquet.Net we decided to build a console application that at least begins to address the deficit.

There are three distinct output formats that the parq tool supports:

  • Interactive
  • Full
  • Schema

The Interactive mode transforms the console into a navigation tool over the parquet file you supply. You can use the arrow keys to move around the dataset and look at summarised data.

The Full mode outputs the whole summarised dataset to the console window, with columns truncated at a configurable size but designed to allow tooling interoperability with a formatted output.

The Schema mode outputs the list of columns, which is useful if you're a developer looking to wrap Parquet data sets into strongly typed models.

Next Steps

Visit here to find resources and additional information about the Parquet.Net project.
