{"id":12,"date":"2013-02-03T13:54:04","date_gmt":"2013-02-03T10:24:04","guid":{"rendered":"http:\/\/vua.nadiran.com\/?p=12"},"modified":"2013-04-07T07:28:51","modified_gmt":"2013-04-07T03:58:51","slug":"%d8%ac%d8%b2%d9%88%d9%87-esl-%db%8c%d8%a7%d8%af%da%af%db%8c%d8%b1%db%8c-%d8%b1%db%8c%d8%a7%d8%b6%db%8c%d8%a7%d8%aa","status":"publish","type":"post","link":"https:\/\/vua.nadiran.com\/?p=12","title":{"rendered":"ESL Handout: Learning Mathematics"},"content":{"rendered":"<p style=\"text-align: center;\">Download PDF:\u00a0<a title=\"element of statistical learning\" href=\"http:\/\/www.stanford.edu\/~hastie\/local.ftp\/Springer\/OLD\/\/ESLII_print10.pdf\" target=\"_blank\">The Elements of Statistical Learning<br \/>\n<\/a><\/p>\n<h1 id=\"firstHeading\" lang=\"fa\" style=\"text-align: left;\">Definition of Supervised Learning\u00a0<a title=\"\u06cc\u0627\u062f \u06af\u06cc\u0631\u06cc \u0628\u0627 \u0646\u0638\u0627\u0631\u062a \" href=\"http:\/\/fa.wikipedia.org\/wiki\/%DB%8C%D8%A7%D8%AF%DA%AF%DB%8C%D8%B1%DB%8C_%D8%A8%D8%A7_%D9%86%D8%B8%D8%A7%D8%B1%D8%AA\" target=\"_blank\">Supervised learning<\/a><\/h1>\n<p><!--more--><\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Preface to the Second Edition vii<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Preface to the First Edition xi<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f1 Introduction 1<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2 Overview of Supervised Learning 9<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 9<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f2 Variable Types and Terminology . . . . . . . . . . . . . . 
9<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f3 Two Simple Approaches to Prediction:<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Least Squares and Nearest Neighbors . . . . . . . . . . . 11<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f3\u066b\u06f1 Linear Models and Least Squares . . . . . . . . 11<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f3\u066b\u06f2 Nearest-Neighbor Methods . . . . . . . . . . . . 14<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f3\u066b\u06f3 From Least Squares to Nearest Neighbors . . . . 16<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f4 Statistical Decision Theory . . . . . . . . . . . . . . . . . 18<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f5 Local Methods in High Dimensions . . . . . . . . . . . . . 22<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f6 Statistical Models, Supervised Learning<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">and Function Approximation . . . . . . . . . . . . . . . . 28<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f6\u066b\u06f1 A Statistical Model<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">for the Joint Distribution Pr(X, Y ) . . . . . . . 28<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f6\u066b\u06f2 Supervised Learning . . . . . . . . . . . . . . . . 29<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f6\u066b\u06f3 Function Approximation . . . . . . . . . . . . . 29<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f7 Structured Regression Models . . . . . . . . . . . . . . . 
32<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f7\u066b\u06f1 Difficulty of the Problem . . . . . . . . . . . . . 32<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f8 Classes of Restricted Estimators . . . . . . . . . . . . . . 33<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f8\u066b\u06f1 Roughness Penalty and Bayesian Methods . . . 34<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f8\u066b\u06f2 Kernel Methods and Local Regression . . . . . . 34<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f8\u066b\u06f3 Basis Functions and Dictionary Methods . . . . 35<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f2\u066b\u06f9 Model Selection and the Bias\u2013Variance Tradeoff . . . . . 37<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . 39<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3 Linear Methods for Regression 43<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 43<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f2 Linear Regression Models and Least Squares . . . . . . . 44<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f2\u066b\u06f1 Example: Prostate Cancer . . . . . . . . . . . . 49<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f2\u066b\u06f2 The Gauss\u2013Markov Theorem . . . . . . . . . . . 
51<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f2\u066b\u06f3 Multiple Regression<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">from Simple Univariate Regression . . . . . . . . 52<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f2\u066b\u06f4 Multiple Outputs . . . . . . . . . . . . . . . . . 56<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f3 Subset Selection . . . . . . . . . . . . . . . . . . . . . . . 57<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f3\u066b\u06f1 Best-Subset Selection . . . . . . . . . . . . . . . 57<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f3\u066b\u06f2 Forward- and Backward-Stepwise Selection . . . 58<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f3\u066b\u06f3 Forward-Stagewise Regression . . . . . . . . . . 60<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f3\u066b\u06f4 Prostate Cancer Data Example (Continued) . . 61<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f4 Shrinkage Methods . . . . . . . . . . . . . . . . . . . . . . 61<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f4\u066b\u06f1 Ridge Regression . . . . . . . . . . . . . . . . . 61<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f4\u066b\u06f2 The Lasso . . . . . . . . . . . . . . . . . . . . . 68<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f4\u066b\u06f3 Discussion: Subset Selection, Ridge Regression<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">and the Lasso . . . . . . . . . . . . . . . . . . . 69<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f4\u066b\u06f4 Least Angle Regression . . . . . . . . . . . . . . 
73<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f5 Methods Using Derived Input Directions . . . . . . . . . 79<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f5\u066b\u06f1 Principal Components Regression . . . . . . . . 79<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f5\u066b\u06f2 Partial Least Squares . . . . . . . . . . . . . . . 80<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f6 Discussion: A Comparison of the Selection<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">and Shrinkage Methods . . . . . . . . . . . . . . . . . . . 82<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f7 Multiple Outcome Shrinkage and Selection . . . . . . . . 84<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8 More on the Lasso and Related Path Algorithms . . . . . 86<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8\u066b\u06f1 Incremental Forward Stagewise Regression . . . 86<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8\u066b\u06f2 Piecewise-Linear Path Algorithms . . . . . . . . 89<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8\u066b\u06f3 The Dantzig Selector . . . . . . . . . . . . . . . 89<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8\u066b\u06f4 The Grouped Lasso . . . . . . . . . . . . . . . . 90<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8\u066b\u06f5 Further Properties of the Lasso . . . . . . . . . . 91<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f8\u066b\u06f6 Pathwise Coordinate Optimization . . . . . . . . 92<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f3\u066b\u06f9 Computational Considerations . . . . 
. . . . . . . . . . . 93<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . 94<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4 Linear Methods for Classification 101<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 101<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f2 Linear Regression of an Indicator Matrix . . . . . . . . . 103<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f3 Linear Discriminant Analysis . . . . . . . . . . . . . . . . 106<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f3\u066b\u06f1 Regularized Discriminant Analysis . . . . . . . . 112<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f3\u066b\u06f2 Computations for LDA . . . . . . . . . . . . . . 113<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f3\u066b\u06f3 Reduced-Rank Linear Discriminant Analysis . . 113<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f4 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . 119<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f4\u066b\u06f1 Fitting Logistic Regression Models . . . . . . . . 120<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f4\u066b\u06f2 Example: South African Heart Disease . . . . . 122<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f4\u066b\u06f3 Quadratic Approximations and Inference . . . . 
124<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f4\u066b\u06f4 L1 Regularized Logistic Regression . . . . . . . . 125<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f4\u066b\u06f5 Logistic Regression or LDA? . . . . . . . . . . . 127<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f5 Separating Hyperplanes . . . . . . . . . . . . . . . . . . . 129<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f5\u066b\u06f1 Rosenblatt\u2019s Perceptron Learning Algorithm . . 130<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f4\u066b\u06f5\u066b\u06f2 Optimal Separating Hyperplanes . . . . . . . . . 132<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . 135<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5 Basis Expansions and Regularization 139<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 139<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f2 Piecewise Polynomials and Splines . . . . . . . . . . . . . 141<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f2\u066b\u06f1 Natural Cubic Splines . . . . . . . . . . . . . . . 144<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f2\u066b\u06f2 Example: South African Heart Disease (Continued) 146<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f2\u066b\u06f3 Example: Phoneme Recognition . . . . . . . . . 
148<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f3 Filtering and Feature Extraction . . . . . . . . . . . . . . 150<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f4 Smoothing Splines . . . . . . . . . . . . . . . . . . . . . . 151<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f4\u066b\u06f1 Degrees of Freedom and Smoother Matrices . . . 153<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f5 Automatic Selection of the Smoothing Parameters . . . . 156<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f5\u066b\u06f1 Fixing the Degrees of Freedom . . . . . . . . . . 158<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f5\u066b\u06f2 The Bias\u2013Variance Tradeoff . . . . . . . . . . . . 158<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f6 Nonparametric Logistic Regression . . . . . . . . . . . . . 161<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f7 Multidimensional Splines . . . . . . . . . . . . . . . . . . 162<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f8 Regularization and Reproducing Kernel Hilbert Spaces . 167<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f8\u066b\u06f1 Spaces of Functions Generated by Kernels . . . 168<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f8\u066b\u06f2 Examples of RKHS . . . . . . . . . . . . . . . . 170<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f9 Wavelet Smoothing . . . . . . . . . . . . . . . . . . . . . 174<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f9\u066b\u06f1 Wavelet Bases and the Wavelet Transform . . . 
176<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">\u06f5\u066b\u06f9\u066b\u06f2 Adaptive Wavelet Filtering . . . . . . . . . . . . 179<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Bibliographic Notes . . . . . . . . . . . . . . . . . . . . . . . . . 181<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Appendix: Computational Considerations for Splines . . . . . . 186<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Appendix: B-splines . . . . . . . . . . . . . . . . . . . . . 186<\/p>\n<p dir=\"ltr\" style=\"direction: ltr; text-align: left;\">Appendix: Computations for Smoothing Splines . . . . .<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Download PDF:\u00a0The Elements of Statistical Learning. Definition of Supervised Learning\u00a0Supervised 
learning<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-12","post","type-post","status-publish","format-standard","hentry","category-esl"],"_links":{"self":[{"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=\/wp\/v2\/posts\/12"}],"collection":[{"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12"}],"version-history":[{"count":9,"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=\/wp\/v2\/posts\/12\/revisions"}],"predecessor-version":[{"id":15,"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=\/wp\/v2\/posts\/12\/revisions\/15"}],"wp:attachment":[{"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vua.nadiran.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}