Backtesting Strategies with R
2018/05/06
Chapter 1 Introduction
This book is designed not just to produce statistics on many of the most common technical patterns in the stock market, but to show actual trades in such scenarios.
Test a strategy; reject it if the results are not promising.
Apply a range of parameters to strategies for optimization.
Attempt to kill any strategy that looks promising.
Let me explain that last one a bit. Just because you may find a strategy that seems to beat the market, with a nice profit and a low drawdown, that doesn't mean you have found a strategy ready to put to work. On the contrary, you should work to disprove it. Nothing is worse than putting an unprofitable strategy to work because it wasn't tested rigorously. We'll cover that later.
1.1 R Resources
This book assumes you have at least a basic working knowledge of the R platform. If you are new to R or need a refresher, the following site should be helpful:
In addition, the packages used in this book can be found under the TradeAnalytics project on R-Forge. There you will find forums and source code that helped inspire this book.
I would also recommend reading Guy Yollin's presentations on backtesting, as well as the Using Quantstrat presentation by Jan Humme and Brian Peterson.
This book is not meant to replace any of the existing resources on backtesting strategies in R. Rather, the intent is to enhance and simplify those resources. If something isn't covered in this book, read the presentations mentioned above.
Also, this book is open source. Anyone is welcome to contribute. You can find the source code on my GitHub account.
1.2 Libraries
The only library required to run the backtests is quantstrat; quantstrat will in turn load all the other required libraries.
This version of quantstrat includes the following packages, among others:
With these libraries we will have everything we need to fully test strategies and measure performance. See 1.3 sessionInfo for more details.
Additional libraries we may use for analysis or for presenting the book:
In this book we use version 0.9.1739 of the quantstrat library. quantstrat provides the base functions we will use to build our strategies: adding indicators and signals, and creating rules for when to buy and when to sell.
quantstrat is for signal-based trading strategies, not time-based ones. However, you can create functions that add signals based on timeframes and implement those functions as indicators. We'll get to that later.
quantstrat also allows us to test a strategy on one or many symbols. The downside of using many symbols is that it can be resource-intensive. We can also test strategies over a range of parameters. Say, for example, you want to test a simple SMA strategy but want to find the best-performing SMA parameter; quantstrat allows for this. Again, though, it can be resource-intensive.
3.1 Settings and Variables
The settings mentioned here will be used in all of our backtests. They are required; you will get errors if you run any of the strategies without including the settings and variables below. Some of them may change depending on the strategy, which will be noted.
First we use Sys.setenv() to set our timezone to UTC.
Next, since we will be working with stocks on the US market, we need to set our currency object to USD.
When backtesting strategies you should always include periods of market turmoil. After all, you don't want to see only how your strategy performs when the market is strong, but also how it performs when the market is weak. For this book we'll use the years 2008 and 2009.
init_date: the date on which we will initialize our account and portfolio objects. This date should be the day prior to the start date.
start_date: first date of data to retrieve.
end_date: last date of data to retrieve.
init_equity: initial account equity.
adjustment: boolean - TRUE if we should adjust prices for dividends, stock splits, etc.; otherwise FALSE.
You should always work with adjusted pricing when possible, to give yourself the best results.
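As a sketch, the settings just described might be written as follows. The dates and the equity amount here are illustrative placeholders, not values prescribed by the book:

```r
# Illustrative settings block; init_date is the day prior to start_date.
Sys.setenv(TZ = "UTC")       # run everything in UTC

library(quantstrat)
currency("USD")              # set the portfolio currency object to USD

init_date   <- "2007-12-31"  # date to initialize account and portfolio objects
start_date  <- "2008-01-01"  # first date of data to retrieve
end_date    <- "2009-12-31"  # last date of data to retrieve
init_equity <- 1e4           # initial account equity ($10,000, illustrative)
adjustment  <- TRUE          # adjust prices for dividends, splits, etc.
```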
3.2 Symbols
Most of our strategies will use three ETFs: IWM, QQQ and SPY. This is purely for demonstration purposes. They are loaded by base_symbols().
Where we may want to test strategies on a slightly broader scale, we'll use tools_symbols(), which adds to base_symbols() TLT and the Sector SPDR ETFs XLB, XLE, XLF, XLI, XLK, XLP, XLU, XLV and XLY.
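A hypothetical sketch of the two symbol helpers just described (the function bodies below simply mirror the description; the book supplies its own definitions):

```r
# Three demonstration ETFs
base_symbols <- function() {
  c("IWM", "QQQ", "SPY")
}

# base_symbols() plus TLT and the nine Sector SPDR ETFs
tools_symbols <- function() {
  c(base_symbols(), "TLT",
    "XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY")
}
```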
Finally, we may use global_symbols() to get a better view of how a strategy behaves. However, the purpose of this book is to show how to backtest strategies, not to find profitable ones.
3.3 checkBlotterUpdate()
The checkBlotterUpdate() function comes courtesy of Guy Yollin. Its purpose is to check for discrepancies between the account object and the portfolio object. If the function returns FALSE, we should examine why (perhaps we didn't clear out our objects before running the strategy?).
QuantStrat TradeR
Trading, quantstrat, R, and more.
Replicating Volatility ETN Returns From CBOE Futures
This post will demonstrate how to replicate the volatility ETNs (XIV, VXX, ZIV, VXZ) from CBOE futures, thereby allowing any individual to create synthetic ETF returns from before their inception, free of cost.
Before I get to the actual algorithm, it depends on an update to the term structure algorithm I shared some months back.
In that algorithm, by mistake (or for the sake of simplicity), I used calendar days as the time to expiry, when it should have been business days, which also account for weekends and holidays, which are an annoying artifact to keep track of.
So here's the obvious change, in the loop that computes the times to expiry:
The line of particular note is:
What is this bizdays function? It comes from the bizdays package in R.
There's also the tradingHolidays.R script, which makes further use of the bizdays package. Here's what goes on under the hood in tradingHolidays.R, for those who wish to replicate the code:
There are two CSV files that were compiled by hand, but which I will share screenshots of—they are the Easter holidays (because they have to be adjusted to move from a Sunday to a Friday on account of Good Friday), and the rest of the national holidays.
Here is what the Easters CSV looks like:
And the nonEasterHolidays CSV, which contains New Year's Day, Martin Luther King Jr. Day, Presidents' Day, Memorial Day, Independence Day, Labor Day, Thanksgiving Day, and Christmas Day (along with the dates on which they were observed):
Furthermore, we need to adjust for the two days that equities did not trade due to Hurricane Sandy.
So, the list of holidays looks like this:
So once we have our list of holidays, we use the bizdays package to set the holidays and weekends (Saturday and Sunday) as our non-trading days, and use that function to compute the correct times to expiry.
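A minimal sketch of that bizdays usage. The holidays vector here is a stand-in for the full compiled list (the Easter-adjusted holidays, the national holidays, and the two Hurricane Sandy closures); the dates are invented for illustration:

```r
library(bizdays)

# Stand-in for the compiled holiday list (includes the Sandy closures)
holidays <- as.Date(c("2012-10-29", "2012-10-30", "2013-01-01"))

# Calendar with holidays plus Saturday/Sunday as non-trading days
cal <- create.calendar("NYSE_sketch", holidays = holidays,
                       weekdays = c("saturday", "sunday"))

# Business days from a settle date to an expiry date
bizdays("2012-10-26", "2012-11-21", cal)
```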
So, now that we have the updated expiry structure, we can write a function that will correctly replicate the four main ETNs—XIV, VXX, ZIV, and VXZ.
Here's the English explanation:
VXX is made up of two contracts—the front month and the back month—and it has a certain number of trading days (a.k.a. business days) that it trades until expiry; say, 17. Over that time frame, the front month contract (let's call it M1) goes from being the entire allocation of funds to being none of the allocation of funds as it approaches expiry. That is, as the front month contract winds down, the second month contract gradually receives more and more weight, until, at the front month's expiry, the second month contract holds all of the funds—just as it *becomes* the front month. So, say you have 17 days to expiry on the front month. At the expiry of the previous front month contract, the second month will have a weight of 17/17—100%, as it becomes the front month. Then, the next day, that contract, now the front month, will have a weight of 16/17 at settle, then 15/17, and so on. The numerator is called dr, and the denominator is called dt.
However, beyond that, there's a second mechanism responsible for VXX looking the way it does compared to the underlying futures (that is, the decay responsible for short volatility's profits), and that is the "instantaneous" rebalancing. That is, the return for a given day is today's settles multiplied by yesterday's weights, over yesterday's settles multiplied by yesterday's weights, minus one:
(S1_t * w + S2_t * (1 - w)) / (S1_{t-1} * w + S2_{t-1} * (1 - w)) - 1, where w = dr/dt as of t-1.
So, when one moves a day forward—well, tomorrow—today's weights become those of t-1. Yet when were the assets able to rebalance? Well, in ETNs such as VXX and VXZ, the hand-waving is that it happens instantaneously. That is, the front month weight was, say, 93%; the return is realized at settlement (that is, from settle to settle), and immediately after that return is realized, the front month weight moves from 93% to, say, 88%. So, say Credit Suisse (which issues these ETNs) has $10,000 of VXX outstanding (just to keep the arithmetic and the number of zeros tolerable; obviously there's a lot more in reality). After returns are realized, it will immediately sell $500 of its $9,300 in the front month and instantly move it to the second month, going from $9,300 in M1 and $700 in M2 to $8,800 in M1 and $1,200 in M2. When did that $500 move? Immediately, instantaneously—and if you wish, you can apply Clarke's third law and call it "magic".
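The instantaneous-rebalancing return just described can be sketched numerically. The settle values below are invented purely for illustration:

```r
# One-day synthetic VXX return from front/second month settles, using
# yesterday's weights w = dr/dt and 1 - w.
vxx_day_return <- function(S1_t, S2_t, S1_tm1, S2_tm1, dr, dt) {
  w1 <- dr / dt   # yesterday's front-month weight
  w2 <- 1 - w1    # yesterday's second-month weight
  (S1_t * w1 + S2_t * w2) / (S1_tm1 * w1 + S2_tm1 * w2) - 1
}

# e.g. front month settles 14.5 -> 14.2, second month 15.5 -> 15.3,
# with 16 of 17 business days remaining as of yesterday
vxx_day_return(14.2, 15.3, 14.5, 15.5, 16, 17)
```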
The only exception is the day after the roll day, on which the second month simply becomes the front month as the previous front month expires; so what was a 100% weight on the second month will now be a 100% weight on the front month, and there's some additional code that needs to be written to make that distinction.
That's how it works for VXX and XIV. What's the difference for VXZ and ZIV? It's really simple—instead of M1 and M2, VXZ uses the exact same weightings (that is, the time remaining in the front month over the total number of days that contract is the front month), but on M4, M5, M6, and M7, with M4 taking dr/dt, M5 and M6 always being 1, and M7 being 1 - dr/dt.
In any case, here's the code.
So, a big thanks goes out to Michael Kapler of Systematic Investor Toolbox for originally doing the replication and providing his code. My code essentially does the same thing, in a hopefully better-commented way.
So, ultimately, does it work? Well, using the updated term structure code, I can test that.
While I'm not going to paste the entire term structure code (again, it's available here; just update that script with the updates from this post), here's how you'd run the new function:
And since it returns both the VXX returns and the VXZ returns, we can compare them both.
With the result:
Basically, a perfect match.
Let's do the same thing, with ZIV.
So, the replication from futures does a bit better than the ETN. But the trajectory is largely identical.
And that concludes this post. I hope it has shed some light on how these volatility ETNs work, and how to obtain them directly from the futures data published by the CBOE, which are the inputs to my term structure algorithm.
This also means that institutions interested in trading my strategy can obtain the leverage to trade the futures-composite replicated variants of these ETNs, at greater volume.
Thanks for reading.
NOTES: For those interested in a retail subscription strategy for trading volatility, do not hesitate to subscribe to my volatility trading strategy. For those interested in hiring me full-time or for long-term consulting projects, I can be reached on my LinkedIn, or at my email: ilya.kipnis@gmail.
(Don't Get) Contangled Up In Noise
This post will be about investigating the efficacy of contango as a volatility trading signal.
For those who trade volatility (like me), a term you may see that's somewhat ubiquitous is "contango". So what does this term mean?
Well, simply put: it just means the ratio of the second month of VIX futures over the first. The idea is that when the second month of futures is higher than the first, people's outlook for volatility is greater in the future than it is in the present, and therefore the futures are "in contango"—which is most of the time.
Furthermore, those who try to find decent volatility trading ideas may often have seen the claim that futures being in contango means that holding a short volatility position will be profitable.
Is this the case?
Well, there's an easy way to answer that.
First off, refer back to my post on obtaining free futures data from the CBOE.
Using that data, we can obtain our signal (that is, to run the code in this post, first run the code in that post).
Now, let's get our XIV data (again, big thanks to Mr. Helmuth Vollmeier for so kindly providing it).
Now, here's how this works: as the CBOE doesn't update its settles until around 9:45 AM EST on the following day (e.g. Tuesday's settlement data won't be released until Wednesday at 9:45 AM EST), we have to enter at the close of the day after the signal fires. (For those wondering, my subscription strategy uses this mechanism, giving subscribers ample time to execute throughout the day.)
So, let's compute our backtest returns. Here's a stratStats function to compute some summary statistics.
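Since the function itself isn't reproduced in this excerpt, here is a sketch of a stratStats-style summary consistent with the statistics discussed below (annualized return, drawdown, Calmar, Ulcer Performance Index), assuming PerformanceAnalytics is available:

```r
library(PerformanceAnalytics)

# Summary statistics for one or more xts return streams
stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets),  # CAGR, ann. std dev, Sharpe
                 maxDrawdown(rets))              # worst peak-to-trough loss
  stats[5, ] <- stats[1, ] / stats[4, ]          # Calmar: CAGR over max drawdown
  stats[6, ] <- stats[1, ] / UlcerIndex(rets)    # Ulcer Performance Index
  rownames(stats)[4:6] <- c("Worst Drawdown", "Calmar",
                            "Ulcer Performance Index")
  stats
}
```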
With the following results:
So, obviously, this is a disaster. Visual inspection shows devastating, multi-year drawdowns. Using the table.Drawdowns command, we can view the worst ones.
So, the top 3 are horrendous, and anything above 30% is still pretty awful. A couple of those drawdowns lasted multiple years as well, with a massive length to the trough: 458 trading days is nearly two years, and 364 is about a year and a half. Imagine seeing a strategy be consistently on the wrong side of the trade for nearly two years, and, when all is said and done, having lost three-quarters of everything in that strategy.
There's no sugar-coating this: such a strategy can only be called absolute garbage.
So let's try one modification: we'll require both contango (C2 > C1), and that contango be above its 60-day simple moving average, similar to my VXV/VXMT strategy.
With the results:
So, the Calmar ratio is still safely below 1, the Ulcer Performance Index is still in the basement, the maximum drawdown is long past the point at which people would have abandoned the strategy, and so on.
So, even though it improved, it's still safe to say this strategy doesn't perform well. Even after the large 2007-2008 drawdown, it still gets some things very wrong, such as being exposed through all of August 2017.
While I think there are applications for contango in volatility investing, I don't think its use is in generating a long/short volatility signal on its own. Rather, I think other indicators and sources of data do a better job of that—such as VXV/VXMT, which has since been iterated on to form my subscription strategy.
Thanks for reading.
NOTE: I am currently seeking networking opportunities, long-term projects, and full-time positions related to my skill set. My LinkedIn profile can be found here.
Comparing Some Strategies from Easy Volatility Investing, and the table.Drawdowns Command
This post will be about comparing strategies from the paper "Easy Volatility Investing", along with a demonstration of R's table.Drawdowns command.
First off, before going any further: while I think the execution assumptions found in EVI don't lend the strategies well to actual live trading (although their risk/reward tradeoffs also leave a lot of room for improvement), I think these strategies are great as benchmarks.
So, some time ago, I did an out-of-sample test of one of the strategies found in EVI, which can be found here.
Using the same source of data, I also obtained data for SPY (though, again, AlphaVantage can also provide this service for free for those who don't use Quandl).
Here's the new code.
So, an explanation: there are four return streams here—buy and hold XIV, the DDN momentum from a previous post, and two other strategies.
The simplest, called VRatio, is simply the ratio of the VIX over the VXV. Near the close, check this quantity. If it is less than one, buy XIV; otherwise, buy VXX.
The other, called the Volatility Risk Premium strategy (or VRP for short), compares the 10-day historical volatility (that is, the annualized running ten-day standard deviation) of the S&P 500, subtracts it from the VIX, and takes a 5-day moving average of that. Near the close, when that number is above zero (that is, when the VIX is higher than historical volatility), go into XIV; otherwise, go into VXX.
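The two signals just described can be sketched as follows. The inputs here are synthetic stand-ins (the post uses actual VIX, VXV, and SPY series), and TTR supplies the running functions:

```r
library(TTR)

set.seed(1)
spyRets <- rnorm(40, 0, 0.01)  # synthetic daily SPY returns
vix     <- rep(15, 40)         # synthetic VIX closes
vxv     <- rep(16, 40)         # synthetic VXV closes

# VRatio: VIX over VXV; below one -> buy XIV, otherwise VXX
vRatio        <- vix / vxv
buyXIV_vratio <- vRatio < 1

# VRP: VIX minus 10-day annualized historical vol, smoothed over 5 days;
# above zero -> XIV, otherwise VXX
histVol    <- runSD(spyRets, n = 10) * sqrt(252) * 100  # in VIX points
vrp        <- SMA(vix - histVol, n = 5)
buyXIV_vrp <- vrp > 0
```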
Again, all of these strategies are effectively "observe near/at the close, buy at the close", so they are useful for demonstration purposes, though not for implementation on any large account without incurring market impact.
Here are the results, since 2011 (that is, the actual XIV and ZIV inception):
It's worth noting that both the momentum and the VRP strategies underperform buying and holding XIV over this period. The VRatio strategy, on the other hand, outperforms.
Here's the summary statistics function that compiles some top-level performance metrics.
It's worth noting that all of the benchmark strategies have suffered very large drawdowns since XIV's inception, which we can examine using the table.Drawdowns command, as seen below:
Note that the table.Drawdowns command only examines one return stream at a time. Furthermore, the top argument specifies how many drawdowns to look at, sorted by largest drawdown first.
One reason I think these strategies seem to suffer the drawdowns they do is that they are either all-in on one asset, or the exact opposite, with no room for error.
One last thing, for the curious: here is the comparison of my strategy since 2011 (essentially the XIV inception) benchmarked against the strategies in EVI (I have been trading it with live capital since September, and recently opened a subscription service for it):
Thanks for reading.
NOTE: I am currently looking for networking and full-time opportunities related to my skill set. My LinkedIn profile can be found here.
Launching a Subscription Service
After gauging interest from my readers, I've decided to open a subscription service. I'll copy and paste the FAQs—my best attempt at answering as many questions as possible ahead of time—and I can answer more in the future.
I'm choosing to use Patreon simply to outsource both the technical details of handling subscriptions and of creating a centralized place to post subscription-based content.
The FAQs (copied from the subscription page):
Thank you for visiting. After gauging interest from readers of my main site (quantstrattrader.wordpress), I created this as a subscription page for quantitative investment strategies, with the goal of turning subscribers' cash into more cash, net of subscription fees (hopefully). The systems I develop come from a background of learning from experienced professionals in quantitative trading, and senior researchers at large firms. The current system I initially prototyped several years ago and watched as it was tracked, before finally deploying my own capital into it earlier this year, making the most recent tweaks along the way.
And while past performance doesn't guarantee future results, and the past doesn't repeat itself, it often rhymes—so let's turn money into more money.
Some FAQs about the strategy:
What is the subscription price for this strategy?
Currently, after gauging interest from readers and conducting research based on other sites, the tentative pricing is $50/month. As this strategy builds a track record, that may be subject to change in the future, and notifications will be made in such an event.
What is the description of the strategy?
The strategy is mainly a short volatility system that trades XIV, ZIV, and VXX. As volatility strategies go, it is fairly conservative in that it uses several different checks in order to ensure a position.
What is the strategy's edge?
In two words: risk management. Essentially, there are a few discrete criteria for entering an investment, and the system spends a not-insignificant amount of time with no exposure when some of those criteria provide contradictory signals. Furthermore, the system uses disciplined methodologies in its construction in order to avoid unnecessary free parameters, and to keep the strategy as parsimonious as possible.
Do you trade your own capital with this strategy?
When was the in-sample training period for this system?
A site that no longer updates its blog (Volatility Made Simple) once tracked a more rudimentary strategy that I wrote about several years ago. I was particularly pleased with the results of that vetting, and have recently received input to improve my system to a much greater degree, as well as gained the confidence to invest live capital into it.
How many trades per year does the system make?
In a backtest from April 20, 2008 through the end of 2016, the system made 187 transactions in XIV (both buy and sell), 160 in ZIV, and 52 in VXX. That means over the course of approximately 9 years, there were on average 43 transactions per year. In some cases, this may simply be switching from XIV to ZIV or vice versa. In other words, trades last approximately a week (some may be longer, some shorter).
When will signals be posted?
Signals will be posted sometime between 12 PM and market close (4 PM EST). In backtesting, they are tested as market-on-close orders, so individuals assume any risk/reward of executing earlier.
How often is this system in the market?
About 56% of the time. However, over the course of backtesting (and live trading), only about 9% of months have had a zero return.
What is the distribution of winning, losing, and zero months?
As of late October 2017, there have been about 65% winning months (with an average gain of 12.8%), 26% losing months (with an average loss of 4.9%), and 9% zero months.
What are some other statistics about the strategy?
Since 2011 (around the time XIV officially came into being, rather than relying on synthetic data), the strategy has boasted an 82% annualized return, with a 24.8% maximum drawdown and an annualized standard deviation of 35%. This means a Sharpe ratio (return over standard deviation) higher than 2, and a Calmar ratio higher than 3. It also has an Ulcer Performance Index of 10.
What are the worst drawdowns?
Since 2011 (again, around the time of XIV's inception), the largest drawdown was 24.8%, starting on October 31, 2011, with a new equity high made on January 12, 2012. The longest drawdown started on August 21, 2014 and recovered on April 10, 2015, lasting 160 trading days.
Will the subscription price change in the future?
If the strategy continues to deliver strong returns, there may be reason to raise the price, so long as the returns bear it out.
Can a conservative risk signal be provided for those who may not be able to stomach a 25% drawdown?
A variant of the strategy that targets about half of the strategy's annualized standard deviation boasts a 40% annualized return for about a 12% drawdown since 2011. Overall, its risk-to-reward statistics are slightly better, but at the cost of cutting overall returns in half.
Can XIV have a termination event?
This refers to the idea of the XIV ETN terminating if it loses 80% of its value in a single day. To give an idea of the likelihood of such an event: using synthetic data, the XIV ETN had a massive 92% drawdown over the course of the 2008 financial crisis. Over the history of that synthetic (pre-inception) and realized (post-inception) data, the absolute worst day was a 26.8% decline. It is worth noting that the strategy was not in XIV during that day.
What is the strategy's worst day?
On September 16, 2016, the strategy lost 16% in one day. This was at the tail end of a stretch of positive days that had made about 40%.
What are the strategy's risks?
The first risk is that, given that this strategy is naturally biased towards short volatility, it can have some sharp drawdowns due to the nature of volatility spikes. The second risk is that, given that this strategy sometimes spends time in ZIV, it will underperform XIV on some good days. This second risk is a consequence of the additional layers of risk management in the strategy.
How complex is this strategy?
Not overly. It's only slightly more complex than a basic momentum strategy when counting free parameters, and can be explained in a few minutes.
Does this strategy use any complex machine learning methodologies?
No. The data requirements of such algorithms, and the noise in the financial world, make it too dangerous to apply these methodologies, and my research thus far has not borne the fruit to justify incorporating them.
Will instrument volume ever be a concern (particularly for ZIV)?
According to one individual who worked on the creation of the original VXX ETN (and, by extension, its inverse, XIV), new shares of ETNs can be created by the issuer (in ZIV's case, Credit Suisse) on demand. In short, the concern over volume is more a concern over the reputability of the party making the request. In other words, it depends on how well the strategy does.
Can the strategy be held liable/accountable/responsible for a subscriber's loss/drawdown?
Let this serve as a disclaimer: by subscribing, you agree to waive any legal claim against the strategy or its creator(s) in the event of drawdowns, losses, etc. The subscription is for viewing the output of a program, and this service does not actively manage a penny of subscribers' actual assets. Subscribers can choose to ignore the strategy's signals at a moment's notice, at their discretion. The program's output should not be construed as investment advice coming from a CFP, CFA, RIA, etc.
Why should these signals be trusted?
Because my work on other topics has been on full, public display for several years. Unlike other sites, I have shown "bad backtests", thus breaking the adage of "you'll never see a bad backtest". I have shown thoroughness in my research, and the same thoroughness has been applied to this system as well. Until there is a track record long enough for the system to stand on its own, trust in the system is trust in the system's creator.
Who is the intended audience for these signals?
The intended audience is individual, retail investors with a certain risk tolerance, and it is priced accordingly.
Isn't volatility investing very risky?
It's risky from the perspective that the underlying instruments have the capacity to realize very large drawdowns (greater than 60%, and even greater than 90%). However, from a purely numerical standpoint: Amazon, the company that has swallowed up a great deal of retail shopping, has had, since its inception, an annualized rate of return of 37.1%, a standard deviation of 61.5%, a worst drawdown of 94%, and an Ulcer Performance Index of 0.9. By comparison, XIV from 2008 (using synthetic data) had an annualized rate of return of 35.5%, a standard deviation of 57.7%, a worst drawdown of 92%, and an Ulcer Performance Index of 0.6. If Amazon is considered a blue-chip asset, then by quantitative comparison, a system seeking to capitalize on volatility bets should be viewed in a similar light. To be sure, the strategy's performance vastly outperforms buying and holding XIV (which nobody should do). Still, the philosophy that volatility products are vastly riskier than household tech names simply does not hold up, unless the future differs wildly from the past.
Is there a possibility of collaborating with other strategy creators?
Feel free to contact me at my email ilya.kipnis@gmail to discuss that possibility. I ask for a daily stream of returns before beginning any discussion.
Because, past all the fancy window dressing and interesting choice of vocabulary, Patreon is simply a platform that handles payments and creates a centralized place from which to post subscription-based content, as opposed to maintaining mailing lists and other technical headaches. Essentially, it's just a way to outsource the technical end of running a business, even if the window dressing is a bit unconventional.
Thanks for reading.
NOTE: I am currently interested in networking and full-time roles based on my skills. My LinkedIn profile can be found here.
The Return of Free Data and Possible Volatility Trading Subscription
This post will be about pulling free data from AlphaVantage, and gauging interest for a volatility trading subscription service.
So first off: ever since the yahoos at Yahoo decided to turn off their free data, the world of free daily data has been in something of a dark age. Well, thanks to Josh Ulrich, Paul Teetor, and other R/Finance individuals (see blog.fosstrading.com/2017/10/getsymbols-and-alpha-vantage.html), the latest version of quantmod (which can be installed from CRAN) now contains a way to get free financial data from AlphaVantage going back to the year 2000, which is usually enough for most backtests, as that date predates the inception of most ETFs.
Here's how to do it.
Once you do that, downloading data is simple, if a bit slow. Here's how to do it.
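A sketch of the quantmod call in question; "YOUR_KEY_HERE" is a placeholder for a free API key obtained from AlphaVantage:

```r
library(quantmod)

# Pull full daily history for SPY from AlphaVantage (src = "av");
# adjusted = TRUE requests split/dividend-adjusted prices.
getSymbols("SPY", src = "av", api.key = "YOUR_KEY_HERE",
           output.size = "full", adjusted = TRUE)

head(SPY)
```

Note that `output.size = "full"` is what fetches the entire history rather than the most recent observations.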
And the results:
Which means that if any of my old posts on asset allocation had become somewhat defunct thanks to bad Yahoo data, they will now work again with a slight modification to the data-input algorithms.
Beyond demonstrating this routine, one other thing I'd like to do is gauge interest for a volatility signal subscription service, for a system I personally began trading a few months ago.
Simply put, I have seen other sites offering subscription services with worse risk/reward than the strategy I currently trade, which switches between XIV, ZIV, and VXX. Currently, the equity curve, on a log10 scale, looks like this:
That is, $1,000 in 2008 would have become approximately $1,000,000 today, had one been able to trade this strategy since then.
Since 2011 (around the time of XIV's inception), the performance has been:
Considering that some sites out there charge upwards of $50 a month for either a single tactical asset rotation strategy (and a lot more for a combination) with inferior risk/return profiles, or a volatility strategy that may have a massive, crash-driven drawdown in its history, I was hoping to gauge a price point for what readers would consider paying for signals from a strategy better than those.
Thanks for reading.
NOTE: I am currently interested in networking and seeking full-time opportunities related to my skill set. My LinkedIn profile can be found here.
The Kelly Criterion — Does It Work?
This post will be about implementing and investigating the running Kelly criterion — that is, a continuously adjusted Kelly criterion that changes as a strategy realizes returns.
For those not familiar with the Kelly criterion, it's the idea of adjusting bet sizes to maximize a strategy's long-term growth rate. Both Wikipedia (en.wikipedia.org/wiki/Kelly_criterion) and Investopedia have entries on the Kelly criterion. Essentially, it's about maximizing your long-run expectation of a betting system, by sizing bets higher when the edge is higher, and vice versa.
There are two formulations of the Kelly criterion: the Wikipedia result presents it as mean over sigma squared. The Investopedia definition is P - [(1 - P) / winLossRatio], where P is the probability of a winning bet, and winLossRatio is the average win over the average loss.
In any case, here are the two implementations.
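Since the implementations themselves aren't reproduced in this excerpt, here is a sketch of the two formulations under stated assumptions — TTR's running functions and a 252-day lookback window (the window length is my choice, not one fixed by the post):

```r
library(TTR)

# Wikipedia formulation: running mean return over running variance
kellyWiki <- function(rets, n = 252) {
  runMean(rets, n) / runVar(rets, n = n)
}

# Investopedia formulation: P - (1 - P) / (avgWin / avgLoss)
kellyInvestopedia <- function(rets, n = 252) {
  wins    <- as.numeric(rets > 0)
  losses  <- as.numeric(rets < 0)
  p       <- runSum(wins, n) / n                            # win probability
  avgWin  <- runSum(rets * wins, n) / runSum(wins, n)       # average win
  avgLoss <- -runSum(rets * losses, n) / runSum(losses, n)  # average loss (positive)
  p - (1 - p) / (avgWin / avgLoss)
}
```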
Let's try this with some data. At this point in time, I'm going to show a non-replicable volatility strategy that I currently trade.
For the record, here are its statistics:
Now, let's see what the Wikipedia version does:
The results are simply ridiculous. And here would be why: say you have a mean return of .0005 per day (5 bps/day), and a standard deviation equal to that (that is, a Sharpe ratio of 1). You would have 1/.0005 = 2000. In other words, a leverage of 2000 times. This clearly makes no sense.
The other variant is the more particular Investopedia definition.
Looks a bit more reasonable. However, how does it stack up against not using it at all?
Turns out, the fabled Kelly Criterion doesn’t really change things all that much.
For the record, here are the statistical comparisons:
Thanks for reading.
NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.
Leverage Up When You’re Down?
This post will investigate the idea of reducing leverage when drawdowns are small, and increasing leverage as losses accumulate. It’s based on the idea that whatever goes up must come down, and whatever comes down generally goes back up.
I originally came across this idea from this blog post.
So, first off, let’s write an easy function that allows replication of this idea. Essentially, we have several arguments:
One: the default leverage (that is, when your drawdown is zero, what’s your exposure)? For reference, in the original post, it’s 10%.
Next: the various leverage levels. In the original post, the leverage levels are 25%, 50%, and 100%.
And lastly, we need the corresponding thresholds at which to apply those leverage levels. In the original post, those levels are 20%, 40%, and 55%.
So, now we can create a function to implement that in R. The idea being that we have R compute the drawdowns, and then use that information to determine leverage levels as precisely and frequently as possible.
Here’s a quick piece of code to do so:
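Since the code itself isn't reproduced in this excerpt, here is a sketch of such a function, with defaults mirroring the original post's example (10% default exposure; 25%/50%/100% exposure at 20%/40%/55% drawdowns); Drawdowns() comes from PerformanceAnalytics:

```r
library(PerformanceAnalytics)

# Scale next-day exposure up as the benchmark's running drawdown deepens.
drawdownLeverage <- function(rets,
                             ddThresh   = c(.20, .40, .55),   # drawdown thresholds
                             leverage   = c(.25, .50, 1.00),  # exposure at each threshold
                             defaultLev = .10) {              # exposure at zero drawdown
  dd  <- -Drawdowns(rets)  # running drawdown, as positive fractions
  lev <- xts(rep(defaultLev, nrow(dd)), order.by = index(dd))
  for (i in seq_along(ddThresh)) {
    lev[dd >= ddThresh[i]] <- leverage[i]  # deeper drawdown -> higher exposure
  }
  lag.xts(lev) * rets  # trade the next day at the computed exposure
}
```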
So, let’s replicate some results.
And our results look something like this:
That said, what would happen if one were to extend the data for all available XIV data?
A different story.
In this case, I think the takeaway is that such a mechanism does well when the drawdowns for the benchmark in question occur sharply, so that the lower exposure protects from those sharp drawdowns, and then the benchmark spends much of the time in a recovery mode, so that the increased exposure has time to earn outsized returns, and then draws down again. When the benchmark continues to see drawdowns after maximum leverage is reached, or continues to perform well when not in drawdown, such a mechanism falls behind quickly.
As always, there is no free lunch when it comes to drawdowns, as trying to lower exposure in preparation for a correction will necessarily mean forfeiting a painful amount of upside in the good times, at least as presented in the original post.
Thanks for reading.
NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.
Let’s Talk Drawdowns (And Affiliates)
This post will be directed towards those newer in investing, with an explanation of drawdowns–in my opinion, a simple and highly important risk statistic.
Would you invest in this?
As it turns out, millions of people do, and did. That is the S&P 500, from 2000 through 2018, more colloquially referred to as “the stock market”. Plenty of people around the world invest in it, and for a risk to reward payoff that is very bad, in my opinion. This is an investment that, in ten years, lost half of its value–twice!
At its simplest, an investment–placing your money in an asset like a stock, a savings account, and so on, instead of spending it, has two things you need to look at.
First, what’s your reward? If you open up a bank CD, you might be fortunate to get 3%. If you invest in the stock market, you might get 8% per year (on average) if you hold it for 20 years. In other words, you stow away $100 on January 1st, and you might come back and find $108 in your account on December 31st. This is often called the compound annualized growth rate (CAGR)–meaning that if you have $100 one year and earn 8%, you have $108, then earn 8% on that, and so on.
The second thing to look at is the risk. What can you lose? The simplest answer to this is “the maximum drawdown”. If this sounds complicated, it simply means “the biggest loss”. So, if you had $100 one month, $120 next month, and $90 the month after that, your maximum drawdown (that is, your maximum loss) would be 1 – 90/120 = 25%.
When you put the reward and risk together, you can create a ratio, to see how your rewards and risks line up. This is called a Calmar ratio, and you get it by dividing your CAGR by your maximum drawdown. The Calmar Ratio is a ratio that I interpret as “for every dollar you lose in your investment’s worst performance, how many dollars can you make back in a year?” For my own investments, I prefer this number to be at least 1, and know of a strategy for which that number is above 2 since 2018, or higher than 3 if simulated back to 2008.
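The arithmetic of the last two paragraphs can be sketched in a few lines of code (Python here, purely for illustration; the function names are my own):

```python
def max_drawdown(equity):
    """Largest peak-to-trough loss, as a fraction of the peak."""
    peak, mdd = equity[0], 0.0
    for value in equity:
        peak = max(peak, value)
        mdd = max(mdd, 1 - value / peak)
    return mdd

def calmar_ratio(cagr, mdd):
    """Reward per unit of worst-case loss: CAGR divided by maximum drawdown."""
    return cagr / mdd

# The $100 -> $120 -> $90 example from the text:
print(max_drawdown([100, 120, 90]))  # 0.25, i.e. a 25% maximum drawdown
```

So an investment earning 8% a year with that 25% worst loss would have a Calmar ratio of 0.08 / 0.25 = 0.32.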
Most stocks don’t even have a Calmar ratio of 1–the level at which an investment makes back, in an average year, as much as it can possibly lose. Even Amazon, the company whose stock made Jeff Bezos the richest man in the world, has a Calmar Ratio of less than 2/5, with a maximum loss of more than 90% in the dot-com crash. The S&P 500, again, “the stock market”, since 1993, has a Calmar Ratio of around 1/6. That is, the worst losses can take *years* to make back.
A lot of wealth advisers like to say that they recommend a large holding of stocks for young people. In my opinion, whether you’re young or old, losing half of everything hurts, and there are much better ways to make money than to simply buy and hold a collection of stocks.
For those with coding skills, one way to gauge just how good or bad an investment is, is this:
An investment has a history–that is, in January, it made 3%, in February, it lost 2%, in March, it made 5%, and so on. By shuffling that history around, so that say, January loses 2%, February makes 5%, and March makes 3%, you can create an alternate history of the investment. It will start and end in the same place, but the journey will be different. For investments that have existed for a few years, it is possible to create many different histories, and compare the Calmar ratio of the original investment to its shuffled “alternate histories”. Ideally, you want the investment to be ranked among the highest possible ways to have made the money it did.
To put it simply: would you rather fall one inch a thousand times, or fall a thousand inches once? Well, the first one is no different than jumping rope. The second one will kill you.
Here is some code I wrote in R (if you don’t code in R, don’t worry) to see just how the S&P 500 (the stock market) did compared to how it could have done.
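The original code is in R; as a language-agnostic illustration of the same shuffling idea, here is a Python sketch (the monthly-returns assumption, the simulation count, and the function names are mine, not from the original post):

```python
import random

def shuffled_calmar_rank(returns, n_sims=1000, seed=42):
    """Compare an investment's Calmar ratio against those of shuffled
    'alternate histories' built from the same monthly returns."""
    def calmar(rets):
        equity, peak, mdd = 1.0, 1.0, 0.0
        for r in rets:
            equity *= 1 + r
            peak = max(peak, equity)
            mdd = max(mdd, 1 - equity / peak)
        years = len(rets) / 12            # monthly returns assumed
        cagr = equity ** (1 / years) - 1
        return cagr / mdd if mdd > 0 else float("inf")

    rng = random.Random(seed)
    actual = calmar(returns)
    sims = []
    for _ in range(n_sims):
        alt = returns[:]
        rng.shuffle(alt)                  # same start and end point, different journey
        sims.append(calmar(alt))
    worse = sum(s < actual for s in sims)
    return actual, worse                  # how many alternate histories did worse
```

Every shuffled history ends at the same final equity, so only the drawdown (and hence the Calmar ratio) changes; the rank of the real history among the shuffles is the statistic of interest.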
This is the resulting plot:
That red line is the actual performance of the S&P 500 compared to what could have been. And of the 1000 different simulations, only 91 did worse than what happened in reality.
This means that the stock market isn’t a particularly good investment, and that you can do much better using tactical asset allocation strategies.
One site I’m affiliated with, is AllocateSmartly. It is a cheap investment subscription service ($30 a month) that compiles a collection of asset allocation strategies that perform better than many wealth advisers. When you combine some of those strategies, the performance is better still. To put it into perspective, one model strategy I’ve come up with has this performance:
In this case, the compound annualized growth rate is nearly double that of the maximum loss. For those interested in something a bit more aggressive, this strategy ensemble uses some fairly conservative strategies in its approach.
In conclusion, when considering how to invest your money, keep in mind both the reward, and the risk. One very simple and important way to understand risk is how much an investment can possibly lose, from its highest, to its lowest value following that peak. When you combine the reward and the risk, you can get a ratio that tells you about how much you can stand to make for every dollar lost in an investment’s worst performance.
Thanks for reading.
NOTE: I am interested in networking opportunities, projects, and full-time positions related to my skill set. If you are looking to collaborate, please contact me on my LinkedIn here.
An Out of Sample Update on DDN’s Volatility Momentum Trading Strategy and Beta Convexity.
The first part of this post is a quick update on Tony Cooper (of Double Digit Numerics)’s volatility ETN momentum strategy from the Volatility Made Simple blog (which stopped updating a year and a half ago). The second part will cover Dr. Jonathan Kinlay’s Beta Convexity concept.
So, now that I have the ability to generate a term structure and constant expiry contracts, I decided to revisit some of the strategies on Volatility Made Simple and see if any of them are any good (long story short: all of the publicly detailed ones aren’t so hot besides mine–they either have a massive drawdown in-sample around the time of the crisis, or a massive drawdown out-of-sample).
Why this strategy? Because it seemed different from most of the usual term structure ratio trades (of which mine is an example), so I thought I’d check out how it did since its first publishing date, and because it’s rather easy to understand.
Here’s the strategy:
Take XIV, VXX, ZIV, VXZ, and SHY (this last one as the “risk free” asset), and at the close, invest in whichever has had the highest 83 day momentum (this was the result of optimization done on volatilityMadeSimple).
Here’s the code to do this in R, using the Quandl EOD database. There are two variants tested–observe the close, buy the close (AKA magical thinking), and observe the close, buy tomorrow’s close.
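The actual implementation is in R against the Quandl EOD database; purely to illustrate the selection rule, here is a hedged Python sketch (the data format and function name are placeholders of mine):

```python
def momentum_pick(prices, lookback=83):
    """prices: dict mapping ticker -> list of closes, oldest first.
    At the latest close, pick the asset with the highest `lookback`-day
    momentum. The tickers (XIV, VXX, ZIV, VXZ, SHY) and the 83-day
    lookback come from the strategy description above."""
    best, best_mom = None, float("-inf")
    for ticker, closes in prices.items():
        if len(closes) <= lookback:
            continue                      # not enough history yet
        mom = closes[-1] / closes[-1 - lookback] - 1
        if mom > best_mom:
            best, best_mom = ticker, mom
    return best
```

The post tests two variants; in the honest one, the pick made at tonight’s close is only applied to tomorrow’s close-to-close return, rather than to today’s (the “magical thinking” variant).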
Here are the results.
Looks like this strategy didn’t pan out too well. Just a daily reminder that if you’re using a fine grid search to select a particularly good parameter (e.g. n = 83 days? Maybe 4 21-day trading months, but even that would have been n = 84), you’re asking for, in the words of Mr. Tony Cooper, a visit from the grim reaper.
Moving on to another topic: whenever Dr. Jonathan Kinlay posts something that I think I can replicate, I’d be very wise to do so, as he is a very skilled and experienced practitioner (and also includes me on his blogroll).
A topic that Dr. Kinlay covered is the idea of beta convexity–namely, that an asset’s beta to a benchmark may be different when the benchmark is up as compared to when it’s down. Essentially, it’s the idea that we want to weed out firms that are what I’d deem “losers in disguise”–i.e. those that act fine when times are good (which is when we really don’t care about diversification, since everything is going up anyway), but do nothing during bad times.
The beta convexity is calculated quite simply: take the beta of an asset to a benchmark when the benchmark has a positive return, subtract the beta of the asset to the benchmark when the benchmark has a negative return, and square the difference. That is, (beta_bench_positive – beta_bench_negative) ^ 2.
Here’s some R code to demonstrate this, using IBM vs. the S&P 500 since 1995.
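The original demonstration is in R with IBM vs. the S&P 500; as a language-agnostic illustration of the same computation, here is a Python sketch using plain OLS betas (the function names and implementation details are my own):

```python
def beta(asset, bench):
    """OLS beta of asset returns regressed on benchmark returns."""
    n = len(bench)
    ma, mb = sum(asset) / n, sum(bench) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(asset, bench)) / n
    var = sum((b - mb) ** 2 for b in bench) / n
    return cov / var

def beta_convexity(asset, bench):
    """(beta on benchmark-up days minus beta on benchmark-down days), squared."""
    up = [(a, b) for a, b in zip(asset, bench) if b > 0]
    down = [(a, b) for a, b in zip(asset, bench) if b < 0]
    beta_up = beta([a for a, _ in up], [b for _, b in up])
    beta_down = beta([a for a, _ in down], [b for _, b in down])
    return (beta_up - beta_down) ** 2
```

A sanity check: an asset whose returns exactly track the benchmark has beta 1 in both regimes, and therefore zero beta convexity.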
Thanks for reading.
NOTE: I am always looking to network, and am currently actively looking for full-time opportunities which may benefit from my skill set. If you have a position which may benefit from my skills, do not hesitate to reach out to me. My LinkedIn profile can be found here.
Testing the Hierarchical Risk Parity algorithm.
This post will be a modified backtest of the Adaptive Asset Allocation backtest from AllocateSmartly, using the Hierarchical Risk Parity algorithm from last post, because Adam Butler was eager to see my results. On the whole, as Adam Butler had told me he had seen, HRP does not generate outperformance when applied to a small, carefully-constructed, diversified-by-selection universe of asset classes, as opposed to a universe of hundreds or even several thousand assets, where its theoretically superior properties should make it a superior algorithm.
First off, I would like to thank one Matthew Barry, for helping me modify my HRP algorithm so as to not use the global environment for recursion. You can find his github here.
Here is the modified HRP code.
With covMat and corMat being from the last post. In fact, this function can be further modified by encapsulating the clustering order within the getRecBipart function, but in the interest of keeping the code as similar to Marcos Lopez de Prado’s code as I could, I’ll leave this here.
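The actual modified code is in R and follows Marcos Lopez de Prado’s paper. As a rough illustration of just the recursive bisection step, here is a Python sketch (the full algorithm also derives the asset ordering from hierarchical clustering of the correlation matrix; this sketch takes that ordering as given, and all names are my own):

```python
def inverse_variance_weights(cov, idx):
    """Inverse-variance weights within one cluster of assets."""
    ivs = [1.0 / cov[i][i] for i in idx]
    total = sum(ivs)
    return [iv / total for iv in ivs]

def cluster_variance(cov, idx):
    """Variance of a cluster under its inverse-variance weighting."""
    w = inverse_variance_weights(cov, idx)
    return sum(w[i] * w[j] * cov[idx[i]][idx[j]]
               for i in range(len(idx)) for j in range(len(idx)))

def hrp_bisect(cov, order):
    """Recursive bisection: repeatedly split the cluster-ordered assets in
    half, allocating between halves inversely to their cluster variances."""
    weights = {i: 1.0 for i in order}
    clusters = [order]
    while clusters:
        nxt = []
        for c in clusters:
            if len(c) < 2:
                continue
            left, right = c[:len(c) // 2], c[len(c) // 2:]
            var_l = cluster_variance(cov, left)
            var_r = cluster_variance(cov, right)
            alpha = 1 - var_l / (var_l + var_r)
            for i in left:
                weights[i] *= alpha
            for i in right:
                weights[i] *= 1 - alpha
            nxt += [left, right]
        clusters = nxt
    return weights
```

With four uncorrelated assets of equal variance, the bisection hands out equal 25% weights, as expected.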
Anyhow, the backtest will follow. One thing I will mention is that I’m using Quandl’s EOD database, as Yahoo has really screwed up their financial database (i.e. some sector SPDRs have broken data, dividends not adjusted, etc.). While this database is a $50/month subscription, I believe free users can access it up to 150 times in 60 days, so that should be enough to run the backtests from this blog, so long as you save your downloaded time series for later use with write.zoo.
This code needs the tseries library for the portfolio.optim function for the minimum variance portfolio (Dr. Kris Boudt has a course on this at DataCamp), and the other standard packages.
A helper function for this backtest (and really, any other momentum rotation backtest) is the appendMissingAssets function, which simply adds on assets not selected to the final weighting and re-orders the weights by the original ordering.
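The helper itself is R, but the logic is tiny; a Python sketch of the same idea (the name mirrors the R helper described above):

```python
def append_missing_assets(weights, all_assets):
    """Add zero weights for assets not selected this period, and return the
    weights re-ordered by the universe's original ordering."""
    return [weights.get(a, 0.0) for a in all_assets]

# e.g. weights computed for only 2 of 4 assets in the universe:
print(append_missing_assets({"SPY": 0.6, "TLT": 0.4},
                            ["SPY", "EFA", "TLT", "GLD"]))  # [0.6, 0.0, 0.4, 0.0]
```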
Next, we make the call to Quandl to get our data.
While Josh Ulrich fixed quantmod to actually get Yahoo data after Yahoo broke the API, the problem is that the Yahoo data is now garbage as well, and I’m not sure how much Josh Ulrich can do about that. I really hope some other provider can step up and provide free, usable EOD data so that I don’t have to worry about readers not being able to replicate the backtest, as my policy for this blog is that readers should be able to replicate the backtests so they don’t just nod and take my word for it. If you are or know of such a provider, please leave a comment so that I can let the blog readers know all about you.
Next, we initialize the settings for the backtest.
The AAA backtest actually uses a 126-day lookback rather than a 6-month one, but since the strategy trades at the end of every month, a 6-month lookback is effectively the same thing, give or take a few days out of 126, and the code is less complex this way.
Next, we have our actual backtest.
In a few sentences, this is what happens:
The algorithm takes a subset of the returns (the past six months at every month), and computes absolute momentum. It then ranks the ten absolute momentum calculations, and selects the intersection of the top 5, and those with a return greater than zero (so, a dual momentum calculation).
If no assets qualify, the algorithm invests in nothing. If there’s only one asset that qualifies, the algorithm invests in that one asset. If there are two or more qualifying assets, the algorithm computes a covariance matrix by multiplying 20-day volatilities with a 126-day correlation matrix (that is, sd_20′ %*% sd_20, multiplied elementwise by cor_126). It then computes normalized inverse volatility weights using the volatility from the past 20 days, a minimum variance portfolio with the portfolio.optim function, and lastly, the hierarchical risk parity weights using the HRP code above from Marcos Lopez de Prado’s paper.
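Schematically, the covariance construction and the dual momentum filter described above might look like this (a Python sketch with my own naming; the actual backtest does this in R on xts objects):

```python
def blended_covariance(sd_short, cor_long):
    """Covariance from short-window vols and a long-window correlation
    matrix: cov[i][j] = sd_short[i] * sd_short[j] * cor_long[i][j],
    i.e. the outer product of the vols, elementwise times the correlations."""
    n = len(sd_short)
    return [[sd_short[i] * sd_short[j] * cor_long[i][j] for j in range(n)]
            for i in range(n)]

def dual_momentum_picks(momentum, top_n=5):
    """Intersection of the top-N assets by momentum and those with
    momentum greater than zero (absolute + relative momentum)."""
    ranked = sorted(momentum, key=momentum.get, reverse=True)[:top_n]
    return [a for a in ranked if momentum[a] > 0]
```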
Lastly, the program puts together all of the weights, and adds a cash investment for any period without any investments.
Here are the results:
In short, in the context of a small, carefully-selected and allegedly diversified (I’ll let Adam Butler speak for that one) universe dominated by the process of which assets to invest in as opposed to how much, the theoretical upsides of an algorithm which simultaneously exploits a covariance structure without needing to invert a covariance matrix can be lost.
However, this test (albeit from 2007 onwards, thanks to ETF inception dates combined with lookback burn-in) confirms what Adam Butler himself told me, which is that HRP hasn’t impressed him, and from this backtest, I can see why. However, in the context of dual momentum rank selection, I’m not convinced that any weighting scheme will realize much better performance than any other.
Thanks for reading.
NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My linkedIn profile can be found here.
FOMC Cycle Trading Strategy in Quantstrat.
Another hotly anticipated FOMC meeting kicks off next week, so I thought it would be timely to highlight a less well-known working paper, “Stock Returns over the FOMC Cycle”, by Cieslak, Morse and Vissing-Jorgensen (current draft June 2018). Its main result is:
Over the last 20 years, the average excess return on stocks over Treasury bills follows a bi-weekly pattern over the Federal Open Market Committee meeting cycle. The equity premium over this 20-year period was earned entirely in weeks 0, 2, 4 and 6 in FOMC cycle time, with week 0 starting the day before a scheduled FOMC announcement day.
In this post, we’ll look to recreate their cycle pattern and then backtest a trading strategy to test the claim of economic significance. Another objective is to evaluate the R package Quantstrat “for constructing trading systems and simulation.”
Although the authors used 20 years of excess return data from 1994 to 2018, instead we’ll use S&P500 ETF (SPY) data from 1994 to March 2018 and the FOMC dates (from my previous post here: returnandrisk/2018/01/fomc-dates-full-history-web-scrape.html).
As there is not a lot of out-of-sample data since the release of the paper in 2018, we’ll use all the data to detect the pattern, and then proceed to check the impact of transaction costs on the economic significance of one possible FOMC cycle trading strategy.
FOMC Cycle Pattern.
The chart and table below clearly show the bi-weekly pattern over the FOMC cycle of Cieslak et al. in SPY 5-day returns. This is based on calendar weekdays (i.e. the day count includes holidays), with week 0 starting one day before a scheduled FOMC announcement day (i.e. on day -1). Returns in even weeks (weeks 0, 2, 4, 6) are positive, while those in odd weeks (weeks -1, 1, 3, 5) are lower and mostly slightly negative.
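To make the day-count convention concrete, here is a small Python sketch. The exact week boundaries (week 0 spanning weekday days -1 through 3, week 1 spanning days 4 through 8, and so on) follow my reading of the convention described above, so treat them as an assumption:

```python
def fomc_cycle_week(day):
    """Map an FOMC-cycle day count (weekdays only, holidays included) to
    its cycle week. Day -1 is the weekday before a scheduled announcement;
    each cycle week covers five weekdays."""
    return (day + 1) // 5

def is_up_phase(day):
    """Even weeks (0, 2, 4, 6) were the profitable ones in the paper."""
    return fomc_cycle_week(day) % 2 == 0
```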
Table of Returns by FOMC Week, Days & Phase.
Economic Significance: FOMC Cycle Trading Strategy Using Quantstrat.
In this section, we’ll create a trading strategy using the R Quantstrat package to test the claim of economic significance of the pattern. Note, Quantstrat is “still in heavy development” and as such is not available on CRAN but needs to be downloaded from the development web site. Nonetheless, it’s been around for some time and it should be up to the backtesting task…
Based on the paper’s main result and our table above confirming the up-phase is more profitable, we’ll backtest a long only strategy that buys the SPY on even weeks (weeks 0, 2, 4, 6) and holds for 5 calendar days only, and compare it to a buy and hold strategy. In addition, we’ll look at the effect of transaction costs on overall returns.
There are a few things to note:
1) We’ll use a bet size of 100% of equity for all trades. This may not be optimal in developing trading systems but will allow for easy comparison with the buy and hold passive strategy, which is 100% allocated.
2) Assume 5 basis points (0.05%) in execution costs (including commission and slippage), and initial equity of $100,000.
3) Execution occurs on the close of the same day that the buy/sell signal happens. Unfortunately, Quantstrat does not allow this out-of-the-box, so we need to do a hack – a custom indicator function that shifts the signals forward in time (see the “get.fomc.cycle” function above).
The following are the resulting performance metrics for the trading strategy, using 5 basis points for transaction costs, and comparisons with the passive buy and hold strategy (before and after transaction costs).
Summary Performance for Trading Strategy.
Trade Statistics.
Monthly Returns.
Summary Performance for Benchmark Buy and Hold Strategy.
Comparison of Trading Strategy with Buy and Hold (BEFORE transaction costs)
Comparison of Trading Strategy with Buy and Hold (AFTER transaction costs)
Conclusion.
FOMC Cycle Pattern.
We were able to clearly see the bi-weekly pattern over the FOMC cycle using SPY data, a la Cieslak, Morse and Vissing-Jorgensen.
Economic Significance: FOMC Cycle Trading Strategy.
Before transaction costs, we were able to reproduce similar results to the paper, with the long only strategy of buying the SPY in even weeks and holding for 5 days. In our case, this strategy added about 2% p.a. to buy and hold returns, reduced volatility by 30% and increased the Sharpe ratio by 70% to 0.82 (from 0.47).
However, after allowing for a reasonable 5 basis points (0.05%) in execution costs, annualized returns fall below that of the buy and hold strategy (9.15%) to 8.55%. As volatility remains lower, this means the risk-adjusted performance is better by only 30% now (Sharpe ratio of 0.62). See table below for details.
Execution costs (brokerage and slippage) can have a material impact on trading system performance. So the key takeaway is to be explicit in accounting for them when claiming economic significance. There are a lot of backtests out there that don’t…
Quantstrat.
There is a bit of a learning curve with the Quantstrat package but once you get used to it, it’s a solid backtesting platform. In addition, it has other capabilities like optimization and walk-forward testing.
The main issue I have is that it doesn’t natively allow you to execute on the daily close when you get a signal on that day’s close – you need to do a hack. This puts it at a bit of a disadvantage to other software like TradeStation, MultiCharts, NinjaTrader and Amibroker (presumably MatLab too). Hopefully the developers will reconsider this, to help drive higher adoption of their gReat package…
QuantStrat TradeR.
Trading, QuantStrat, R, and more.
Nuts and Bolts of Quantstrat, Part V.
This post will be about pre-processing custom indicators in quantstrat–that is, how to add values to your market data that do not arise from the market data itself.
The first four parts of my nuts and bolts of quantstrat were well received. They are even available as a datacamp course. For those that want to catch up to today’s post, I highly recommend the datacamp course.
To motivate this post, the idea is that say you’re using alternative data that isn’t simply derived from a transformation of the market data itself–i.e. you have a proprietary alternative data stream that may predict an asset’s price, you want to employ a cross-sectional ranking system, or any number of things. How do you do this within the context of quantstrat?
The answer is that it’s as simple as binding a new xts to your asset data, as this demonstration will show.
First, let’s get the setup out of the way.
Now, we have our non-derived indicator. In this case, it’s a toy example–the value is 1 if the year is odd (i.e. 2003, 2005, 2007, 2009), and 0 if it’s even. We compute that and simply column-bind (cbind) it to the asset data.
Next, we just have a very simple strategy–buy a share of SPY on odd years, sell on even years. That is, buy when the nonDerivedIndicator column crosses above 0.5 (from 0 to 1), and sell when the opposite occurs.
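The actual demo does this in R/quantstrat by cbind-ing an xts column onto the asset data and signaling on crossovers; the underlying logic is simple enough to sketch in Python (function names are mine):

```python
from datetime import date

def non_derived_indicator(dates):
    """1 if the year is odd, 0 if even -- a stand-in for any alternative
    data column you would column-bind onto the asset's market data."""
    return [d.year % 2 for d in dates]

def signals(indicator):
    """Buy when the indicator crosses above 0.5, sell when it crosses
    below -- the same crossover logic the strategy uses."""
    out = []
    for prev, cur in zip(indicator, indicator[1:]):
        if prev < 0.5 <= cur:
            out.append("buy")
        elif prev >= 0.5 > cur:
            out.append("sell")
        else:
            out.append(None)
    return out
```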
In conclusion, you can create signals based off of any data in quantstrat. Whether that means volatility ratios, fundamental data, cross-sectional ranking, or whatever proprietary alternative data source you may have access to, this very simple process is how you can use quantstrat to add all of those things to your systematic trading backtest research.
Thanks for reading.
Note: I am always interested in full-time opportunities which may benefit from my skills. I have experience in data analytics, asset management, and systematic trading research. If you know of any such opportunities, do not hesitate to contact me on my LinkedIn, found here.
Review: Inovance’s TRAIDE application.
This review will be about Inovance Tech’s TRAIDE system. It is an application geared towards letting retail investors apply proprietary machine learning algorithms to assist them in creating systematic trading strategies. Currently, my one-line review is that while I hope the company founders mean well, the application is still in an early stage, and so, should be checked out by potential users/venture capitalists as something with proof of potential, rather than a finished product ready for mass market. While this acts as a review, it’s also my thoughts as to how Inovance Tech can improve its product.
A bit of background: I have spoken several times to some of the company’s founders, who sound like individuals at about my age level (so, fellow millennials). Ultimately, the selling point is this:
Systematic trading is cool.
Machine learning is cool.
Therefore, applying machine learning to systematic trading is awesome! (And a surefire way to make profits, as Renaissance Technologies has shown.)
While this may sound a bit snarky, it’s also, in some ways, true. Machine learning has become the talk of the town, from IBM’s Watson (RenTec itself hired a bunch of speech recognition experts from IBM a couple of decades back), to Stanford’s self-driving car (invented by Sebastian Thrun, who now heads Udacity), to the Netflix prize, to god knows what Andrew Ng is doing with deep learning at Baidu. Considering how well machine learning has done at much more complex tasks than “create a half-decent systematic trading algorithm”, it shouldn’t be too much to ask this powerful field at the intersection of computer science and statistics to help the retail investor glued to watching charts generate a lot more return on his or her investments than through discretionary chart-watching and noise trading. To my understanding from conversations with Inovance Tech’s founders, this is explicitly their mission.
However, I am not sure that Inovance’s TRAIDE application actually accomplishes this mission in its current state.
Here’s how it works:
Users select one asset at a time and a date range (data goes back to Dec. 31, 2009). Assets are currently limited to highly liquid currency pairs, with the following bar time frames available: 1 hour, 2 hour, 4 hour, 6 hour, or daily.
Users then select from a variety of indicators, ranging from technical ones (moving averages, oscillators, volume calculations, etc.–mostly an assortment of 20th-century indicators, though the occasional adaptive moving average has managed to sneak in, namely KAMA–see my DSTrading package–and MAMA, aka the MESA Adaptive Moving Average, from John Ehlers) to more esoteric ones such as some sentiment indicators. Here’s where things start to head south for me, however. While it’s easy to add as many indicators as a user would like, there is basically no documentation on any of them, with no links to references, etc., so users bear the onus of understanding what each and every indicator they select actually does, and whether or not those indicators are useful. The TRAIDE application makes zero effort (thus far) to acquaint users with the purpose of these indicators or their theoretical objective (measure conviction in a trend, detect a trend, act as an oscillator, etc.).
Furthermore, regarding indicator selection, users also specify only one parameter setting for each indicator per strategy. E.g. if I had an EMA crossover, I’d have to create a new strategy for a 20/100 crossover, another for a 21/100 crossover, and so on, rather than specifying something like this:
Quantstrat itself has this functionality, and while I don’t recall covering parameter robustness checks/optimization (in other words, testing multiple parameter sets–whether one uses them for optimization or robustness is up to the user, not the functionality) in quantstrat on this blog specifically, this information very much exists in what I deem “the official quantstrat manual”, found here. In my opinion, the option of covering a range of values is mandatory so as to demonstrate that any given parameter setting is not a random fluke. Outside of quantstrat, I have demonstrated this methodology in my Hypothesis Driven Development posts, and in coming up for parameter selection for volatility trading.
Where TRAIDE may do something interesting, however, is that after the user specifies his indicators and parameters, its “proprietary machine learning” algorithms (WARNING: COMPLETELY BLACK BOX) determine for what range of values of the indicators in question generated the best results within the backtest, and assign them bullishness and bearishness scores. In other words, “looking backwards, these were the indicator values that did best over the course of the sample”. While there is definite value to exploring the relationships between indicators and future returns, I think that TRAIDE needs to do more in this area, such as reporting P-values, conviction, and so on.
For instance, if you combine enough indicators, your “rule” is a market order that’s simply the intersection of all of the ranges of your indicators. TRAIDE may tell a user that the strongest bullish signal occurs when the difference of the moving averages is between 1 and 2, the ADX is between 20 and 25, the ATR is between 0.5 and 1, and so on. Each setting the user selects further narrows down the number of trades the simulation makes. In my opinion, there are more ways to explore the interplay of indicators than one giant AND statement–for example, some sort of “OR” statement (e.g. select all values, and put on a trade when 3 out of 5 indicators fall into the selected bullish range, in order to place more trades). While it may be wise to filter trades down to very rare instances when trading a massive number of instruments, such that of several thousand possible instruments only several are trading at any given time, with TRAIDE a user selects only *one* asset class (currently, one currency pair) at a time, so I’m hoping to see TRAIDE create more functionality in terms of what constitutes a trading rule.
After the user selects both a long and a short rule (by simply filtering on indicator ranges that TRAIDE’s machine learning algorithms have said are good), TRAIDE turns that into a backtest with a long equity curve, short equity curve, total equity curve, and trade statistics for aggregate, long, and short trades. For instance, in quantstrat, one only receives aggregate trade statistics. Whether long or short, all that matters to quantstrat is whether or not the trade made or lost money. For sophisticated users, it’s trivial enough to turn one set of rules on or off, but TRAIDE does more to hold the user’s hand in that regard.
Lastly, TRAIDE then generates MetaTrader4 code for a user to download.
And that’s the process.
In my opinion, while what Inovance Tech has set out to do with TRAIDE is interesting, I wouldn’t recommend it in its current state. For sophisticated individuals that know how to go through a proper research process, TRAIDE is too stringent in terms of parameter settings (one at a time), pre-coded indicators (its target audience probably can’t program too well), and asset classes (again, one at a time). However, for retail investors, my issue with TRAIDE is this:
There is a whole assortment of undocumented indicators, which then move to black-box machine learning algorithms. The result is that the user has very little understanding of what the underlying algorithms actually do, and why the logic he or she is presented with is the output. While TRAIDE makes it trivially easy to generate any one given trading system, as multiple individuals have stated in slightly different ways before, writing a strategy is the easy part. Doing the work to understand if that strategy actually has an edge is much harder. Namely, checking its robustness, its predictive power, its sensitivity to various regimes, and so on. Given TRAIDE’s rather short data history (2018 onwards), and coupled with the opaqueness that the user operates under, my analogy would be this:
It’s like giving an inexperienced driver the keys to a sports car in a thick fog on a winding road. Nobody disputes that a sports car is awesome. However, the true burden of the work lies in making sure that the user doesn’t wind up smashing into a tree.
Overall, I like the TRAIDE application’s mission, and I think it may have potential as something for the retail investors that don’t intend to learn the ins-and-outs of coding a trading system in R (despite me demonstrating many times over how to put such systems together). I just think that there needs to be more work put into making sure that the results a user sees are indicative of an edge, rather than open the possibility of highly-flexible machine learning algorithms chasing ghosts in one of the noisiest and most dynamic data sets one can possibly find.
My recommendations are these:
1) Multiple asset classes.
2) Allow parameter ranges, and cap the number of trials at any given point (e.g. 4 indicators with ten settings each = 10,000 possible trading systems = blow up the servers). To narrow down the number of trial runs, use techniques from experimental design to arrive at decent combinations. (I wish I remembered my response surface methodology techniques from my master’s degree about now!)
3) Allow modifications of order sizing (e.g. volatility targeting, stop losses), such as I wrote about in my hypothesis-driven development posts.
4) Provide *some* sort of documentation for the indicators, even if it’s as simple as a link to investopedia (preferably a lot more).
5) Far more output is necessary, especially for users who don’t program. Namely, to distinguish whether or not there is a legitimate edge, or if there are too few observations to reject the null hypothesis of random noise.
6) Far longer data histories. 2018 onwards just seems too short of a time-frame to be sure of a strategy’s efficacy, at least on daily data (may not be true for hourly).
7) Factor in transaction costs. Trading on an hourly time frame will mean far less P&L per trade than on a daily resolution. If MT4 charges a fixed ticket price, users need to know how this factors into their strategy.
8) Lastly, dogfooding. When I spoke last time with Inovance Tech’s founders, they claimed they were using their own algorithms to create a forex strategy, which was doing well in live trading. By the time more of these suggestions are implemented, it’d be interesting to see if they have a track record as a fund, in addition to as a software provider.
If all of these things are accounted for and automated, the product will hopefully accomplish its mission of bringing systematic trading and machine learning to more people. I think TRAIDE has potential, and I’m hoping that its staff will realize that potential.
Thanks for reading.
NOTE: I am currently contracting in downtown Chicago, and am always interested in networking with professionals in the systematic trading and systematic asset management/allocation spaces. Find my LinkedIn here.
EDIT: Today in my email (Dec. 3, 2018), I received a notice that Inovance was making TRAIDE completely free. Perhaps they want a bunch more feedback on it?
Why Backtesting On Individual Legs In A Spread Is A BAD Idea.
So after reading the last post, the author of quantstrat had some critical feedback, mostly about the philosophy that prompted its writing in the first place. Basically, the reason I wrote it, as I stated before, is that I’ve seen many retail users of quantstrat constantly ask “how do I model individual spread instruments”, and otherwise try to look sophisticated by trading spreads.
The truth is that real professionals use industrial-strength tools to determine their intraday hedge ratios (such a tool is called a spreader). The purpose of quantstrat is not to be an execution modeling system, but to be a *strategy* modeling system. Basically, the purpose of your backtest isn’t to look at individual instruments, since in the last post, the aggregate trade statistics told us absolutely nothing about how our actual spread trading strategy performed. The backtest was a mess as far as the analytics were concerned, and thus rendering it more or less useless. So this post, by request of the author of quantstrat, is about how to do the analysis better, and looking at what matters more–the actual performance of the strategy on the actual relationship being traded–namely, the *spread*, rather than the two components.
So, without further ado, let’s look at the revised code:
In this case, things are a LOT simpler. Rather than jumping through the hoops of pre-computing an indicator, along with the shenanigans of separate rules for both the long and the short end, we simply have a spread as it’s theoretically supposed to work–three of an unleveraged ETF against the 3x leveraged ETF, and we can go long the spread, or short the spread. In this case, the dynamic seems to be on the up, and we want to capture that.
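As a rough sketch of what constructing the spread as its own synthetic instrument can look like (the original post's exact code is not reproduced here; the tickers and start date are illustrative):

```r
# Sketch: build a 3:1 synthetic spread and treat it as its own instrument.
# Assumes the quantmod package; UNG/UGAZ and the date range are illustrative.
library(quantmod)

getSymbols(c("UNG", "UGAZ"), from = "2012-02-01", src = "yahoo")

# Three shares of the unleveraged ETF against one share of the 3x ETF.
spread <- 3 * Cl(UNG) - Cl(UGAZ)
colnames(spread) <- "Close"

# From here, the spread series can be registered (e.g. via FinancialInstrument)
# and backtested in quantstrat like any ordinary instrument.
plot(spread, main = "3 UNG - 1 UGAZ")
```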
So how did we do?
And here’s the output:
In other words, the typical profile for a trend follower, rather than the uninformative analytics from the last post. Furthermore, the position sizing and equity curve chart actually make sense now. Here they are.
To conclude, while it’s possible to model spreads using individual legs, it makes far more sense in terms of analytics to actually examine the performance of the strategy on the actual relationship being traded, which is the spread itself. Furthermore, after constructing the spread as a synthetic instrument, it can be treated like any other regular instrument in the context of analysis in quantstrat.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
A Way To Model Execution On Individual Legs Of A Spread In Quantstrat.
In this post, I’ll attempt to address a question I’ve seen tossed around time and again regarding quantstrat.
“How do I model executions on individual underlying instruments in spread trading?”
First off, a disclaimer: this method is a bit of a kludge, and in using it, you’ll lose out on quantstrat’s inbuilt optimization functionality. Essentially, it builds upon the pre-computed signal methodology I described in a previous post.
Essentially, by appending a column with the same name but with different values to two separate instruments, I can “trick” quantstrat into providing me desired behavior by modeling trading on two underlying instruments.
SO here’s the strategy:
Go long 3 shares of the UNG (natural gas) ETF against 1 share of UGAZ (3x bull) when the spread crosses above its 20-day exponential moving average, otherwise, do nothing. Here’s the reasoning as to why:
With the corresponding plot:
So, as you can see, we have a spread that drifts upward (something to do with the nature of the leveraged ETF). Let’s try to capture that with a strategy.
The way I’m going to do that is to precompute a signal–whether or not the spread’s close is above its EMA20, and append that signal to UNG, with the negative of said signal appended to UGAZ, and then encapsulate it in a quantstrat strategy. In this case, there’s no ATR order sizing function or initial equity–just a simple 3 UNG to 1 UGAZ trade.
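A minimal sketch of that pre-computation (the column name precomputedSig matches the text, but the exact values and helper code in the original may differ):

```r
library(quantmod)  # loads xts and TTR (for EMA)

# Hypothetical reconstruction of the pre-computed signal described above.
# Assumes UNG and UGAZ are already-fetched xts objects of OHLC data.
spread    <- 3 * Cl(UNG) - Cl(UGAZ)
emaSpread <- EMA(spread, n = 20)

sig <- as.numeric(spread > emaSpread)  # 1 when spread is above its EMA20, else 0

# Append the same column name to both legs, negated on the UGAZ side, so
# that one pair of rules buys UNG and sells UGAZ on the same timestamp.
UNG$precomputedSig  <- sig
UGAZ$precomputedSig <- -sig
```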
So, did our spread trade work?
Sort of. However, when you think about it–looking at the statistics on a per-instrument basis in a spread trade is a bit of a red herring. After all, outside of a small spread, what one instrument makes, another will lose, so the aggregate numbers should be only slightly north of 1 or 50% in most cases, which is what we see here.
A better way of looking at whether or not the strategy performs is to look at the cumulative sum of the daily P&L.
With the following equity curve:
Is this the greatest equity curve? Probably not. In fact, after playing around with the strategy a little bit, it’s better to actually get in at the close of the next day than the open (apparently there’s some intraday mean-reversion).
Furthermore, one thing to be careful of is that in this backtest, I made sure that for UNG, my precomputedSig would only take values 1 and 0, and vice versa for the UGAZ variant, such that I could write the rules I did. If it took the values 1, 0, and -1, or 1 and -1, the results would not make sense.
In conclusion, the method I showed was essentially a method building on a previous technique of pre-computing signals. Doing this will prevent users from using quantstrat’s built-in optimization functionality, but will allow them to backtest individual leg execution.
To answer one last question, if one wanted to short the spread as well, the thing to do using this methodology would be to pre-compute a second column called, say, precomputedSig2, that behaved the opposite way.
Thanks for reading.
NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
Predicting High Yield with SPY–a Two Part Post.
This post will cover ideas from two individuals: David Varadi of CSS Analytics with whom I am currently collaborating on some volatility trading strategies (the extent of which I hope will end up as a workable trading strategy–my current replica of some of VolatilityMadeSimple’s publicly displayed “example” strategies (note, from other blogs, not to be confused with their proprietary strategy) are something that I think is too risky to be traded as-is), and Cesar Alvarez, of Alvarez Quant Trading. If his name sounds familiar to some of you, that’s because it should. He used to collaborate (still does?) with Larry Connors of TradingMarkets, and I’m pretty sure that sometime in the future, I’ll cover those strategies as well.
The strategy for this post is simple, and taken from this post from CSS Analytics.
Pretty straightforward–compute a 20-day SMA on the SPY (I use unadjusted since that’s what the data would have actually been). When the SPY’s close crosses above the 20-day SMA, buy the high-yield bond index, either VWEHX or HYG, and when the converse happens, move to the cash-substitute security, either VFISX or SHY.
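The logic above can be sketched in a bare-bones, vectorized form (this is a simplified reconstruction, not the post's actual script; fund tickers are as named in the text, and the return legs use adjusted closes):

```r
library(quantmod)
library(PerformanceAnalytics)

getSymbols(c("SPY", "VWEHX", "VFISX"), from = "1997-01-01")

# Signal: SPY close above its 20-day SMA, lagged one day to trade the next bar.
inHighYield <- lag(Cl(SPY) > SMA(Cl(SPY), n = 20))

hyRets   <- ROC(Ad(VWEHX), type = "discrete")
cashRets <- ROC(Ad(VFISX), type = "discrete")

# Align everything on common dates, then switch between the two return streams.
combined  <- na.omit(merge(inHighYield, hyRets, cashRets))
stratRets <- ifelse(combined[, 1], combined[, 2], combined[, 3])

charts.PerformanceSummary(stratRets)
SharpeRatio.annualized(stratRets)
```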
Now, while the above paragraph may make it seem that VWEHX and HYG are perfect substitutes, well, they aren’t, as no two instruments are exactly alike, which, as could be noted from my last post, is a detail that one should be mindful of. Even creating a synthetic “equivalent” is never exactly perfect. Even though I try my best to iron out such issues, over the course of generally illustrating an idea, the numbers won’t line up exactly (though hopefully, they’ll be close). In any case, it’s best to leave an asterisk whenever one is forced to use synthetics for the sake of a prolonged backtest.
The other elephant/gorilla in the room (depending on your preference for metaphorical animals), is whether or not to use adjusted data. The upside to that is that dividends are taken into account. The *downside* to that is that the data isn’t the real data, and also assumes a continuous reinvestment of dividends. Unfortunately, shares of a security are not continuous quantities–they are discrete quantities made so by their unit price, so the implicit assumptions in adjusted prices can be optimistic.
For this particular topic, Cesar Alvarez covered it exceptionally well on his blog post, and I highly recommend readers give that post a read, in addition to following his blog in general. However, just to illustrate the effect, let’s jump into the script.
Here are the results:
Which produces the following equity curves:
As can be seen, the choice to adjust or not can be pretty enormous. Here are the corresponding three statistics:
Even without the adjustment, the strategy itself is…very very good, at least from this angle. Let’s look at the ETF variant now.
The resultant equity curve:
With the corresponding statistics:
Again, another stark difference. Let’s combine all four variants.
The equity curve:
With the resulting statistics:
In short, while the strategy itself seems strong, the particular similar (but not identical) instruments used to implement the strategy make a large difference. So, when backtesting, make sure to understand what taking liberties with the data means. In this case, by turning two levers, the Sharpe Ratio varied from less than 1 to above 4.
Next, I’d like to demonstrate a little trick in quantstrat. Although plenty of examples of trading strategies only derive indicators (along with signals and rules) from the market data itself, there are also many strategies that incorporate data from outside simply the price action of the particular security at hand. Such examples would be many SPY strategies that incorporate VIX information, or off-instrument signal strategies like this one.
The way to incorporate off-instrument information into quantstrat simply requires understanding what the mktdata object is, which is nothing more than an xts type object. By default, a security may originally have just the OHLCV and open interest columns. Most demos in the public space generally use data only from the instruments themselves. However, it is very much possible to actually pre-compute signals.
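For instance, here is a sketch of appending an off-instrument column to a symbol before quantstrat ever sees it (the column name and tickers are illustrative):

```r
library(quantmod)

getSymbols(c("SPY", "HYG"), from = "2008-01-01")

# Pre-compute a signal from SPY and merge it into HYG's data. Because
# mktdata is just an xts object, the extra column travels with HYG.
aboveSMA <- Cl(SPY) > SMA(Cl(SPY), n = 20)
colnames(aboveSMA) <- "aboveSMA"

HYG <- na.omit(merge(HYG, aboveSMA))
# Signals in the strategy can now reference the "aboveSMA" column by name.
```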
Here’s a continuation of the script to demonstrate, with a demo of the unadjusted HYG leg of this trade:
As you can see, no indicators computed from the actual market data, because the strategy used a pre-computed value to work off of. The lowest-hanging fruit of applying this methodology, of course, would be to append the VIX index as an indicator for trading strategies on the SPY.
And here are the results, trading a unit quantity:
And the corresponding position chart:
Lastly, here are the Vanguard links for VWEHX and VFISX. Apparently, neither charges a redemption fee. I’m not sure if this means that they can be freely traded in a systematic fashion, however.
In conclusion, hopefully this post showed a potentially viable strategy, understanding the nature of the data you’re working with, and how to pre-compute values in quantstrat.
Thanks for reading.
Note: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.
Nuts and Bolts of Quantstrat, Part IV.
This post will provide an introduction to the way that rules work in quantstrat. It will detail market orders along with order-sizing functions (limit orders will be saved for a later date). After this post, readers should be able to understand the strategies written in my blog posts, and should be able to write their own. Unlike indicators and signals, rules usually call one function, which is called “ruleSignal” (there is a function that is specifically designed for rebalancing strategies, but it’s possible to do that outside the bounds of quantstrat). For all intents and purposes, this one function handles all rule executions. However, that isn’t to say that rules cannot be customized, as the ruleSignal function has many different arguments that can take in one of several values, though not all permutations will be explored in this post. Let’s take a look at some rules:
In this case, the first thing to note is that as quantstrat is an R library, it can also incorporate basic programming concepts into the actual strategy formulation. In this case, depending on a meta-parameter (that is, a parameter not found in the argument of any indicator, signal, or rule) called atrOrder (a boolean), I can choose which rule I wish to add to the strategy configuration.
Next, here’s the format for adding a rule:
1) The call to add.rule.
2) The name of the strategy (strategy.st)
3) The name of the rule function (this is usually “ruleSignal”)
4) The arguments to ruleSignal:
a) The signal column (sigcol)
b) the value that signals a trigger (sigval)
c) the order type (ordertype)
d) the order side (orderside)
e) to replace any other open signal (replace)
f) The order quantity (orderqty) if no order-sizing function is used.
g) the preferred price (prefer, defaults to Close, but as quantstrat is a next-bar system, I use the open)
h) the order sizing function (osFUN)
i) the arguments to the order-sizing function.
j) There are other arguments to different order types, but we’ll focus on market orders for this post.
5) The rule type (type), which will comprise either “enter” or “exit” for most demos.
6) The path.dep argument, which is always TRUE.
7) (Not shown) the label for the rule. If you’re interested in writing your demos as quickly as possible, these are not necessary if your entry and exit rules are your absolute final points of logic in your backtest. However, if you wish to look at your orders in detail, or use stop-losses/take-profit orders, then the rules need labels, as well.
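Putting those pieces together, a typical pair of market-order rules might look like the following sketch (the labels "longEntry"/"longExit", the order quantity, and strategy.st being already defined are all assumptions):

```r
library(quantstrat)

# Entry rule: buy 100 shares at the next bar's open when "longEntry" fires.
add.rule(strategy.st, name = "ruleSignal",
         arguments = list(sigcol = "longEntry", sigval = TRUE,
                          ordertype = "market", orderside = "long",
                          replace = FALSE, prefer = "Open",
                          orderqty = 100),
         type = "enter", path.dep = TRUE, label = "enterLong")

# Exit rule: flatten the entire position when "longExit" fires.
add.rule(strategy.st, name = "ruleSignal",
         arguments = list(sigcol = "longExit", sigval = TRUE,
                          ordertype = "market", orderside = "long",
                          replace = FALSE, prefer = "Open",
                          orderqty = "all"),
         type = "exit", path.dep = TRUE, label = "exitLong")
```

To use an order-sizing function instead of a flat quantity, drop orderqty from the entry rule and pass osFUN (e.g. osDollarATR) along with its own arguments (such as tradeSize and pctATR) inside the arguments list, as described below.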
While most of the logic to adding your basic rule is almost always boilerplate outside the arguments to ruleSignal, it’s the arguments to ruleSignal that allow users to customize rules.
The sigcol argument is a string that holds the exact name of the signal column that you wish to use to generate your entries (or exits) from. This is the same string that went into the label argument of your add.signal function calls. In quantstrat, labels effectively act as logical links between indicators, signals, rules, and more.
The sigval argument is what value to use to trigger rule logic. Since signal output (so far) is comprised of ones (TRUE) and zeroes (FALSE), I set my sigval to TRUE. It is possible, however, to make a sigSum rule and then allow the sigval argument to take other values.
The ordertype argument is the order type. For most of my demos that I’ve presented thus far, I’ve mostly used “market” type orders, which are the simplest. Market orders execute at the next bar after receiving the signal. They do not execute on the signal bar, but the bar after the signal bar. On daily data, this might cause some P/L due to gaps, but on intraday data, the open of the next bar should be very similar to the close of current bar. One thing to note is that using monthly data, quantstrat uses current-bar execution.
The orderside argument takes one of two values–“long” or “short”. This separates rule executions into two bins, such that long sells won’t work on short positions and vice versa. It also serves to add clarity and readability to strategy specifications.
The replace argument functions in the following way: if TRUE, it overrides any other signal on the same day. Generally, I avoid ever setting this to true, as order sets (not shown in this post) exist deliberately to control order replacement. However, for some reason, it defaults to TRUE in quantstrat, so make sure to set it to FALSE whenever you write a strategy.
The orderqty argument applies only when there’s no osFUN specified. It can take a flat value (e.g. 1, 2, 100, etc.), or, when the rule type is “exit”, a quantity of “all”, to flatten a position. In all the sell rules I use in my demos, my strategies do not scale out of positions, but merely flatten them out.
The prefer argument exists for specifying what aspect of a bar a trade will get in on. Quantstrat by default executes at the close of the next bar. I set this argument to “Open” instead to minimize the effect of the next bar transaction.
The osFUN specifies the order-sizing function to use. Unlike the functions passed into the name arguments in quantstrat (for indicators, signals, or rules), the osFUN argument is actually a function object (that is, it’s the actual function, rather than its name) that gets passed in as an argument. Furthermore, and this is critical: all arguments *to* the order-sizing function must be passed into the arguments for ruleSignal. They are covered through the ellipsis functionality that most R functions include. The ellipsis means that additional arguments can be passed in, and these additional arguments usually correspond to functions used inside the original function that’s called. This, of course, has the potential to violate the black-box modular programming paradigm by assuming users know the inner-workings of pre-existing code, but it offers additional flexibility in instances such as these. So, to give an example, in my entry rule that uses the osDollarATR order-sizing function, arguments such as pctATR and tradeSize are not arguments to the ruleSignal function, but to the osDollarATR function. Nevertheless, the point to pass them in when constructing a quantstrat strategy is in the arguments to ruleSignal.
If you do not wish to use an osFUN, simply use a flat quantity, such as 100, or if using exit type orders, use “all” to flatten a position.
Moving outside the arguments to ruleSignal, we have several other arguments:
The type argument takes one of several values–but “enter” and “exit” are the most basic. They do exactly as they state. There are other rule types, such as “chain” (for stop-losses), which have their own mechanics, but for now, know that “enter” and “exit” are the two basic rules you need to get off the ground.
The path.dep argument should always be TRUE for the ruleSignal function.
Finally, add. rule also contains a label argument that I do not often use in my demos, as usually, my rules are the last point of my logic. However, if one wants to do deeper strategy analysis using the order book, then using these labels is critical.
After adding rules, you can simply call applyStrategy and run your backtest. Here’s an explanation of how that’s done:
As an explanation, I enclose the applyStrategy call in some code to print how much time the backtest took. Generally, on these twelve years of daily data, a single market may take between several seconds to thirty seconds (if a strategy has hundreds of trades per market).
The next four lines essentially update the objects initialized in order of dependency: first the portfolio, then the account for a given date range (the duration of the backtest), and then compute the end equity.
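That boilerplate, as it appears in most of these demos, looks roughly like this (assuming the account was initialized with the same name string as the portfolio, as in the demos):

```r
library(quantstrat)

# Time the backtest.
t1 <- Sys.time()
out <- applyStrategy(strategy = strategy.st, portfolios = portfolio.st)
t2 <- Sys.time()
print(t2 - t1)

# Update objects in order of dependency: portfolio, then account over the
# backtest's date range, then end equity.
updatePortf(portfolio.st)
dateRange <- time(getPortfolio(portfolio.st)$summary)[-1]
updateAcct(portfolio.st, dateRange)
updateEndEq(account.st)
```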
This concludes the basic nuts and bolts of creating a basic nuts and bolts strategy in quantstrat. On this blog, when I make more use of other features, I’ll dedicate other nuts and bolts sections so that readers can use all of quantstrat’s features more efficiently.
Thanks for reading.
Nuts and Bolts of Quantstrat, Part III.
This post will focus on signals in quantstrat.
In comparison to indicators, signals in quantstrat are far more cut-and-dry, as they describe the interaction of indicators with each other–whether that indicator is simply the close price (“Close”), or a computed indicator, there are only so many ways indicators can interact, and the point of signals is to provide the user with a way of describing these relationships–is one greater than another, is the concern only when the cross occurs, does the indicator pass above or below a certain number, etc.
Here’s the code that will provide the example for the demonstration, from the atrDollarComparison strategy:
Adding signals to a strategy has a very similar format to adding indicators. The structure is very similar:
1) The call to add.signal.
2) The name of the strategy (again, strategy.st makes this very simple)
3) The name of the signal function (the majority of which are on display in the preceding block of code)
4) The arguments to said signal function, passed in the same way they are to indicators (that is, arguments=list(args)), but which are far more uniform across the different signal functions than indicator arguments are.
5) The label for the signal column, which is highly similar to the labeling for indicator columns.
The first two steps are identical to the add.indicator step, except with add.signal instead of add.indicator. This is cut and dry.
Beyond this, all of the signal functions I use are presented above. They are:
sigComparison, sigThreshold, sigAND, and sigCrossover.
The arguments for all four are very similar. They contain some measure of columns, a threshold, a relationship between the first and second column (or between the first column and the threshold), and, via the cross argument, whether the signal should return TRUE for the entire duration of the relationship being true, or only on the first bar.
Relationships are specified with a two- or three-character identifier: “gt” stands for greater than (e.g. SMA50 > SMA200), “gte” stands for greater than or equal to, “lt” and “lte” work similarly, and “eq” stands for equal to (which may be useful for certain logic statements such as “stock makes a new seven-day low”, which can be programmed by comparing the close to the running seven-day min, and checking for equality).
Here’s an explanation of all four sig functions:
The sigComparison function compares two columns, and will return TRUE (aka 1) so long as the specified relationship comparing the first column to the second holds. E.g. if you specify SMA50 > SMA200, it will return 1 for every timestamp (aka bar, for those using OHLC data) on which the 50-day SMA is greater than the 200-day SMA. The sigComparison function is best used for setting up filters (e.g. the classic Close > SMA200 formation). This function takes two columns, and a relationship comparing the first to the second column.
The sigCrossover function is identical to the above, except it only returns TRUE on the timestamp (bar) at which the relationship moves from FALSE to TRUE. E.g. going with the above example, you would only see TRUE on the day the SMA50 first crossed over the SMA200. The sigCrossover is useful for setting up buy or sell orders in trend-following strategies.
The sigThreshold signal is identical to the two above signals (depending on whether cross is TRUE or FALSE), but compares one indicator to a fixed quantity, passed in via the threshold argument. For instance, one could contrive an RSI buy order out of a sigCrossover signal by pairing an RSI indicator with a second “indicator” that is nothing but the same buy threshold all the way down, but it is far simpler to use the sigThreshold function wherever oscillator-type or uniform-value indicators (e.g. indicators transformed with a percent rank) are involved.
Lastly, the sigAND signal function, to be pedantic, can also be called colloquially as sigIntersect. It’s a signal function I wrote (from my IKTrading package) that checks if multiple signals (whether two or more) are true at the same time, and like the sigThreshold function, can be set to either return all times that the condition holds, or the first day only. I wrote sigAND so that users would be able to structurally tie up multiple signals, such as an RSI threshold cross coupled with a moving-average filter. While quantstrat does have a function called sigFormula, it involves quoted code evaluation, which I wish to minimize as much as possible. Furthermore, using sigAND allows users to escalate the cross clause, meaning that the signals that are used as columns can be written as comparisons, rather than as crosses. E. G. in this RSI 20/80 filtered on SMA200 strategy, I can simply compare if the RSI is less than 20, and only generate a buy rule at the timestamp after both RSI is less than 20 AND the close is greater than its SMA200. It doesn’t matter whether the close is above SMA200 and the RSI crosses under 20, or if the RSI was under 20, and the close crossed above its SMA200. Either combination will trigger the signal.
One thing to note regarding columns passed as arguments to the signals: quantstrat will do its best to “take an educated guess” regarding which column the user is attempting to refer to. For instance, with daily data, the column names will often be along the lines of XYZ.Open, XYZ.High, XYZ.Low, XYZ.Close, so when “Close” is one of the arguments, quantstrat will make its best guess that the user means the XYZ.Close column. This is also why, once again, I stress that reserved keywords (OHLC keywords, and analogous tick-data keywords) should not be used in labeling. Furthermore, unlike indicators, whose output will usually be something along the lines of FUNCTION_NAME.userLabel, labels for signals are used as-is, so what one passes into the label argument is what one gets.
To put it together, here is the chunk of code again, and the English description of what the signals in the chunk of code do:
1) The first signal checks to see if the “Close” column is greater than (“gt”) the “sma” column (which had a setting of 200), and is labeled “filter”.
2) The second signal checks to see if the “rsi” column is less than (“lt”) the threshold of buyThresh (which was defined earlier as 20), and is labeled as “rsiLtThresh”.
3) The third signal checks when both of the above signals became TRUE for the first time, until one or the other condition becomes false, and is labeled as “longEntry”. NB: the signals themselves do not place the order–I just like to use the label “longEntry” as this allows code in the rules logic to be reused quicker.
4) The fourth signal checks if the “rsi” column crossed over the sell threshold (80), and is labeled as “longExit”.
5) The fifth signal checks if the “Close” column crossed under the “sma” column, and is labeled “filterExit”.
In quantstrat, it’s quite feasible to have multiple signals generate entry orders, and multiple signals generate exit orders. However, make sure that the labels are unique.
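For reference, the five signals just described can be sketched as quantstrat calls (assuming the indicator labels "sma" and "rsi" and the parameters buyThresh = 20 and sellThresh = 80 were defined earlier, as the text implies; sigAND comes from the IKTrading package):

```r
library(quantstrat)
library(IKTrading)  # for sigAND

# 1) Filter: Close greater than the 200-day SMA.
add.signal(strategy.st, name = "sigComparison",
           arguments = list(columns = c("Close", "sma"), relationship = "gt"),
           label = "filter")

# 2) RSI below the buy threshold (TRUE for the whole duration).
add.signal(strategy.st, name = "sigThreshold",
           arguments = list(column = "rsi", threshold = buyThresh,
                            relationship = "lt", cross = FALSE),
           label = "rsiLtThresh")

# 3) Both conditions first become TRUE together.
add.signal(strategy.st, name = "sigAND",
           arguments = list(columns = c("filter", "rsiLtThresh"), cross = TRUE),
           label = "longEntry")

# 4) RSI crosses over the sell threshold.
add.signal(strategy.st, name = "sigThreshold",
           arguments = list(column = "rsi", threshold = sellThresh,
                            relationship = "gt", cross = TRUE),
           label = "longExit")

# 5) Close crosses under the SMA.
add.signal(strategy.st, name = "sigCrossover",
           arguments = list(columns = c("Close", "sma"), relationship = "lt"),
           label = "filterExit")
```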
The next post will cover rules.
Thanks for reading.
Nuts and Bolts of Quantstrat, Part II.
Last week, I covered the boilerplate code in quantstrat.
This post will cover parameters and adding indicators to strategies in quantstrat.
Let’s look at a the code I’m referring to for this walkthrough:
This code contains two separate chunks–parameters and indicators. The parameters chunk is simply a place to store values in one area, and then call them as arguments to the add.indicator and add.signal functions. Parameters are simply variables assigned to values that can be updated when a user wishes to run a demo (or, in other settings, when running optimization processes).
Indicators are constructs computed from market data, and some parameters that dictate the settings of the function used to compute them. Most well-known indicators, such as the SMA (simple moving average), EMA, and so on, usually have one important component, such as the lookback period (aka the ubiquitous n). These are the parameters I store in the parameters chunk of code.
Adding an indicator in quantstrat has five parts to it. They are:
1) The add.indicator function call.
2) The name of the strategy to add the indicator to (which I always call strategy.st, standing for strategy string)
3) The name of the indicator function, in quotes (e.g. “SMA”, “RSI”, etc.)
4) The arguments to the above indicator function, which are the INPUTS in this statement arguments=list(INPUTS)
5) The label that signals and possibly rules will use–which is the column name in the mktdata object.
Notice that the market data (mktdata) input to the indicators has a special input style, as it’s wrapped in a quote() function call. This quote() call essentially tells the strategy to obtain the referred-to object later, at apply-time. The mktdata object is initially the OHLCV (adjusted) price time series one originally obtains from Yahoo (or elsewhere), at least as far as my demos will demonstrate for the foreseeable future. However, the mktdata object will later come to contain all of the indicators and signals added within the strategy. Because of this, here are some functions one should be familiar with for time-series data munging:
Op: returns all columns in the mktdata object containing the term “Open”
Hi: returns all columns in the mktdata object containing the term “High”
Lo: returns all columns in the mktdata object containing the term “Low”
Cl: returns all columns in the mktdata object containing the term “Close”
Vo: returns all columns in the mktdata object containing the term “Volume”
HLC: returns all columns in the mktdata object containing “High”, “Low”, or “Close”.
OHLC: same as above, but includes “Open”.
These all ignore case.
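A quick illustration of these extractors, assuming quantmod-style column names (the ticker and date are illustrative):

```r
library(quantmod)
getSymbols("XLB", from = "2003-01-01")

head(Cl(XLB), 2)    # just the XLB.Close column
head(HLC(XLB), 2)   # XLB.High, XLB.Low, and XLB.Close
head(OHLC(XLB), 2)  # adds XLB.Open to the above
```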
For these reasons, please avoid using these “reserved” terms when labeling (that is, column naming in step 5) your indicators/signals/rules. One particularly easy mistake to make is using the word “slow”. For instance, a naive labeling convention may be to use “maFast” and “maSlow” as labels for, say, a 50-day and 200-day SMA, respectively, and then maybe implement an indicator that uses an HLC for an argument, such as ATR. This may create errors down the line when more than one column has the name “Low”. In the old (CRAN) version of TTR–that is, the version that gets installed if one simply types in.
the SMA function will still append the term “Close” to the output. I’m sure some of you have seen some obscure error when calling applyStrategy. It might look something like this:
This arises as the result of bad labeling. The CRAN version of TTR runs into this from time to time, and if you’re stuck on that version, a kludge to work around this is instead of using.
That [,1] specifies only the first column in which the term “Close” appears. However, I simply recommend upgrading to a newer version of TTR from R-forge. On Windows, this means using R 3.0.3 rather than 3.1.1, due to R-forge’s lack of binaries for Windows for the most recent version of TTR (only source is available), at least as of the time of this writing.
On a whole, however, I highly recommend avoiding reserved market data keywords (open, high, low, close, volume, and analogous keywords for tick data) for labels.
One other aspect to note about labeling indicators is that the indicator column name is not merely the argument to “label”, but rather, the label you provide is appended onto the output of the function. In DSTrading and IKTrading, for instance, all of the indicators (such as FRAMA) come with output column headings. So, when computing the FRAMA of a time series, you may get something like this:
When adding indicators, the user-provided label will come after a period following the initial column-name output, so the column name will be along the lines of “FunctionOutput.userLabel”.
Beyond pitfalls and explanations of labeling, the other salient aspect of indicators is the actual indicator function that’s called, and how its arguments function.
When adding indicators, I use the following format:
This is how these two aspects work:
The INDICATOR_FUNCTION is an actual R function that should take in some variant of an OHLC object (whether one column–most likely close, HLC, or whatever else). Functions such as RSI, SMA, and lagATR (from my IKTrading library) are all examples of such functions. To note, there is nothing “official” as opposed to “custom” about the functions I use for indicators. Indicators are merely R functions (that can be written by any R user) that take in a price series as one of the arguments.
The inputs to these functions are enclosed in the arguments input to the add.indicator function. That is, the part of the syntax that looks like this:
These arguments are the inputs to the function. For instance, if one would write:
In this case, x is a time series based on the market data (that is, the mktdata object), and n is a parameter. As pointed out earlier, the syntax for the mktdata involves the use of the quote function. However, all other parameters to the SMA (or any other) function call are static, at least per individual backtest (these can vary when doing optimization/parameter exploration). Thus, for the classic 200-day simple moving average, the appropriate syntax would contain:
In my backtests, I store the argument to n above the add.indicator call, in my parameters chunk of code, for ease of location. The reason for this is that when adding multiple indicators, signals, and rules, it’s fairly easy to lose track of a hard-coded value among the interspersed code, so I prefer to keep my numerical values collected in one place and reference them in the actual indicator, signal, and rule syntax.
Lastly, one final piece of advice: when constructing a strategy, one need not have all the signals and rules implemented just to check how the indicators will be added to the mktdata object. Instead, if you’re ever unsure what your mktdata object will look like, run the code through the add.indicator syntax and no further. Signals (at least in my demos) start off with a commented bit of syntax; if you see that line, you know that there are no more indicators to add. In any case, the following is a quick way of inspecting indicator output.
For example, using XLB:
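The call itself was omitted above; a minimal sketch, assuming XLB has already been fetched and the indicators added to strategy.st, would be:

```r
# apply only the indicators (no signals/rules) to one symbol's data
test <- applyIndicators(strategy.st, mktdata = OHLC(XLB))
head(test, 5)  # inspect the appended indicator columns
```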
Which would give the output:
This allows a user to see how the indicators will be appended to the mktdata object in the backtest. If the call to applyIndicators fails, it most likely means there is an issue with labeling (column naming).
Next week, I’ll discuss signals, which are a bit more defined in scope.
Thanks for reading.
Nuts and Bolts of Quantstrat, Part I.
Recently, I gave a webinar on some introductory quantstrat. Here’s the link.
So to follow up on it, I’m going to do a multi-week series of posts delving into trying to explain the details of parts of my demos, so as to be sure that everyone has a chance to learn and follow along with my methodologies, what I do, and so on. To keep things simple, I’ll be using the usual RSI 20/80 filtered on SMA 200 demo. This post will deal with the initial setup of any demo–code which will be largely similar from demo to demo.
Let’s examine this code:
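The snippet is not reproduced in this text; based on the walkthrough that follows, it would look roughly like this (the dates are illustrative):

```r
require(quantstrat)
require(IKTrading)
require(PerformanceAnalytics)

initDate <- "1990-01-01"   # must precede the start of the data
from <- "2003-01-01"       # start of the data, yyyy-mm-dd
to <- "2013-12-31"         # end of the data
options(width = 70)        # console output width

source("demo/demoData.R")  # fetches and initializes the instrument data
```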
The first three lines load the libraries I use in my demos. In R, libraries are loaded with a single line. However, installation procedures vary from operating system to operating system. Windows systems are the least straightforward, while Macs can use Unix functionality to behave identically to Linux machines. It’s often good practice to place functions used repeatedly into a package, which is R’s own version of encapsulation and information hiding. Packages don’t always have to be open-sourced to the internet; in many cases, some are used just as local repositories. My IKTrading package started off as such a case; it’s simply a toolbox that contains functionality that isn’t thematically attributable elsewhere.
The next three lines, dealing with dates, all have separate purposes.
The initDate variable needs a date that must occur before the start of data in a backtest. If this isn’t the case, the portfolio will demonstrate a massive drawdown on the initialization date, and many of the backtest statistics will be misleading or nonsensical.
The from and to variables are endpoints on the data that the demoData.R script will use to fetch from Yahoo (or elsewhere). The format is yyyy-mm-dd: four-digit year, two-digit month (e.g., January is “01”), and two-digit day, in that order.
In some cases, I may write the code:
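A hedged sketch of one common way of writing that line:

```r
to <- as.character(Sys.Date())  # set the end of the data to the day the demo is run
```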
This just sets the to date to the time at which I run the demonstration. Although it may affect replication of the results, thanks to some of the yearly metrics I’ve come to utilize, those wishing to see the exact day on which the data ends will be able to. However, in cases where I use data up to the present, it’s often simply an exploration of the indicator, as opposed to trying to construct a fully-fledged trading system.
The options(width=70) line simply controls the width of output to my R console.
The source line is a way to execute other files in the specified directory. Sourcing a file works similarly to specifying a file path. So if, from your current directory, there’s a file you want to source called someFile, you may write a command such as source(“someFile.R”); but if said file is in a different directory, you would use standard Unix file navigation notation to execute it. For instance, if my directory were “IKTrading”, rather than “IKTrading/demo”, I would write source(“demo/demoData.R”). Note that this notation is relative to my current directory. To see your current working directory, type getwd().
To navigate among your working directories, use the setwd command, such as:
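For instance (the path here is purely illustrative):

```r
setwd("~/IKTrading")  # change the working directory to the IKTrading folder
```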
In order to obtain the data, let’s look at the demoData.R file once again.
The getSymbols.warning4.0=FALSE line is simply there to remove the initial warning that comes from getting symbols from Yahoo. It makes no difference to how the demo runs.
The next two lines are critical.
Currency must be initialized for every demo. I’ve yet to see it set to anything besides USD (U.S. dollars); however, the accounting analytics back-end needs to know what currency the prices are listed in. So the currency line cannot be skipped, or the demo will not work.
Next, the Sys.setenv(TZ="UTC") line is necessary because if you look at, say, the data of XLB, and look at the class of its index, here’s what you see:
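A sketch of that check, assuming XLB has been fetched with getSymbols (daily data from Yahoo is indexed by calendar date):

```r
class(index(XLB))
# [1] "Date"
```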
Since the index of the data is a Date-type object, in order for certain orders to work, such as chain rules (which contain stop losses and take profits), the timezone has to be set to UTC, since that’s the time zone of a “Date” class object. If the demo uses the system’s default timezone instead, the timestamps will not match, and there will be order failures.
The symbols assignment is simply one long string vector. Here it is, once again:
There is nothing particularly unique about it. However, I structured the vector so that a comment describing each ETF sits next to its ticker string, for clarity.
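The full vector is not reproduced here; its structure is along these lines (a shortened, illustrative subset of the SPDR sector ETFs):

```r
symbols <- c("XLB",  # materials
             "XLE",  # energy
             "XLF",  # financials
             "XLP"   # consumer staples
             # ...and so on for the rest of the ETF universe
)
```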
From there, the file gets the symbols from yahoo. The extra verbosity around the command is simply to suppress any output to the screen. Here’s the line of code that does this:
I can control whether or not to rerun the data-gathering process by removing XLB from my current working environment. This isn’t the most general way of controlling the data cache (a dedicated boolean flag would be better style), but it works for the examples I use. If I keep XLB in my working environment, this line is skipped altogether, which speeds up the backtest.
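A sketch of the fetch and the cache check described above (the adjust argument is an assumption on my part; it adjusts prices for splits and dividends):

```r
# only fetch the data if XLB isn't already in the working environment
if(!("XLB" %in% ls())) {
  # suppress the console chatter that getSymbols normally prints
  suppressMessages(getSymbols(symbols, from = from, to = to,
                              src = "yahoo", adjust = TRUE))
}
```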
Lastly, the backtest needs the instrument specifications. This is the line of code to do so:
Although it looks fairly trivial at the moment, once a backtest would start dealing with futures, contract multiplier specifications, and other instrument-specific properties, this line becomes far less trivial than it looks.
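That line is the standard FinancialInstrument equity specification, along these lines:

```r
# register each symbol as a stock denominated in USD, with a 1x multiplier
stock(symbols, currency = "USD", multiplier = 1)
```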
Moving back to the main scripts, here is the rest of the initialization boilerplate:
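The boilerplate itself was omitted above; a hedged sketch of it, matching the walkthrough that follows (the strategy name and trade size are illustrative), is:

```r
tradeSize <- 10000                     # dollar size per trade, used by osDollarATR
initEq <- tradeSize * length(symbols)  # initial equity, used to compute returns

# strategy, portfolio, and account all share one name
strategy.st <- portfolio.st <- account.st <- "myStrategy"
rm.strat(portfolio.st)                 # remove any prior run of the strategy

# portfolio first, then account, then orders (in this order)
initPortf(portfolio.st, symbols = symbols,
          initDate = initDate, currency = "USD")
initAcct(account.st, portfolios = portfolio.st,
         initDate = initDate, currency = "USD", initEq = initEq)
initOrders(portfolio.st, initDate = initDate)
```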
The tradeSize and initEq variables are necessary in order to compute returns at the end of a backtest. Furthermore, tradeSize is necessary for the osDollarATR order-sizing function.
Next, I name the strategy, portfolio, and account–all with the same name. The x <- y <- z <- "xyz" format is a multi-assignment syntax that should be used only for assigning several objects that are all initially (or permanently) identical to one another, such as initializing multiple xts objects of the same length.
Next, the removal of the strategy is necessary for rerunning the strategy. If the strategy object exists, and a user attempts to rerun the demo, the demo will crash. Always make sure to remove the strategy.
Next, we have the three initialization steps. Due to the dependencies between the portfolio, account, and orders, the portfolio must be initialized before the account, and it also must be initialized before the orders.
To initialize the portfolio, one needs to name the portfolio something, have a vector of character strings that represent the symbols passed in as the symbols argument, an initial date (initDate), and a currency. This currency was defined earlier in the demoData.R file.
The account initialization replaces the symbols argument with a portfolios argument, and adds an initEq argument, from which to compute returns.
Lastly, the orders initialization needs only a portfolio to reference, and a date from which to begin transactions (initDate, which is earlier than the beginning of the data).
Lastly, we initialize the strategy in the form of the line:
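That line is the standard quantstrat strategy constructor:

```r
# create the strategy object and store it, so later add.indicator/add.signal/
# add.rule calls can find it by name
strategy(strategy.st, store = TRUE)
```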
This is where we put all our indicators, signals, and rules. Without supplying this line to the demo, none of the indicators, signals, and rules will know what strategy object to look for, and the demo will crash.
This concludes the initial boilerplate walkthrough. Next: parameters and indicators.
Thanks for reading.
The Limit of ATR Order Sizing.
Before beginning this post, I’d like to notify readers that I have a webcast tomorrow (Wednesday, Sep. 3) at 4:30 EST for Big Mike’s Trading. Those that can follow the code and the analytics on this blog will see nothing new, but for those that effectively “nod and wait for the punchline” in the form of the equity curve, I’ll demonstrate how to build a strategy “in real time”.
Now onto the post:
While the last post showed how ATR did a better job than raw dollar positions of equalizing risk in the form of standard deviations across instruments, it isn’t the be-all, end-all method of order sizing. Something I learned about recently was portfolio component expected shortfall (along with portfolio component standard deviation). The rabbit hole on these methods runs very deep, including to a paper in the Journal of Risk. To give a quick summary, this computation takes into account not just the well-known mean and covariance, but also interactions between higher-order moments, such as co-skewness and co-kurtosis. The actual details of the math behind this are quite extensive, but luckily, it’s already programmed into the PerformanceAnalytics package, so computing it is as simple as calling a pre-programmed procedure. This demo will, along the way of making yet another comparison between ATR and dollar order sizes, demonstrate one way of doing this.
For those unfamiliar with the terminology, expected shortfall is also known as conditional value-at-risk (aka CVaR), which is a coherent risk measure, while regular value-at-risk is not. For instance, take two bonds, each with a default probability of less than 5% (say, 4.95%): the 5% VaR of either of them is 0, but the 5% VaR of the two-bond portfolio is greater than zero (or less, depending on how you express the quantity, as a portfolio value or a loss value).
In any case, here’s the code:
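The full demo script is omitted above; the core of the component expected shortfall computation, as a sketch (dollarReturns and atrReturns are hypothetical xts objects of per-instrument returns from the two backtests), uses PerformanceAnalytics:

```r
library(PerformanceAnalytics)

# Component ES accounts for co-skewness and co-kurtosis via the
# "modified" (Cornish-Fisher) estimator.
dollarES <- ES(dollarReturns, p = 0.95, method = "modified",
               portfolio_method = "component")
atrES <- ES(atrReturns, p = 0.95, method = "modified",
            portfolio_method = "component")

# percentage contribution of each instrument to portfolio ES,
# compared across the two order-sizing schemes
boxplot(cbind(dollarES$pct_contrib_MES, atrES$pct_contrib_MES),
        names = c("dollarES", "atrES"))
```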
This is the resulting image:
And the corresponding data which was used to generate the box plot:
As can be seen in the image, a box plot of the various ways of computing the percentage of portfolio component risk for the two order types, ATR order sizing still does a better job than raw dollar order sizing in terms of controlling risk. That said, as evidenced by the atrES box plot, there is still a somewhat wide distribution in contributions to portfolio risk between the various instruments. So while even by this portfolio component risk measure it’s readily visible how ATR order sizing improves on dollar order sizing, this also demonstrates that ATR order sizing isn’t the be-all, end-all method of portfolio allocation.
For future note, the application of portfolio component risk metrics is to optimize them in one of two ways: by minimizing the difference between them, or by striving to set them as close to equal to each other as possible (that is, portfolio component risk balance). The PortfolioAnalytics package provides methods for doing that, which I’ll visit in the future.